00:00:00.001 Started by upstream project "autotest-per-patch" build number 126232 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.117 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.118 The recommended git tool is: git 00:00:00.118 using credential 00000000-0000-0000-0000-000000000002 00:00:00.120 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.149 Fetching changes from the remote Git repository 00:00:00.152 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.187 Using shallow fetch with depth 1 00:00:00.187 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.187 > git --version # timeout=10 00:00:00.223 > git --version # 'git version 2.39.2' 00:00:00.223 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.246 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.246 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.573 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.584 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.595 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:05.595 > git config core.sparsecheckout # timeout=10 00:00:05.605 > git read-tree -mu HEAD # timeout=10 00:00:05.623 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:05.642 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:05.643 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:05.728 [Pipeline] Start of Pipeline 00:00:05.743 [Pipeline] library 00:00:05.745 Loading library shm_lib@master 00:00:06.884 Library shm_lib@master is cached. Copying from home. 00:00:06.910 [Pipeline] node 00:00:06.982 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.985 [Pipeline] { 00:00:07.001 [Pipeline] catchError 00:00:07.004 [Pipeline] { 00:00:07.041 [Pipeline] wrap 00:00:07.056 [Pipeline] { 00:00:07.066 [Pipeline] stage 00:00:07.070 [Pipeline] { (Prologue) 00:00:07.251 [Pipeline] sh 00:00:07.537 + logger -p user.info -t JENKINS-CI 00:00:07.554 [Pipeline] echo 00:00:07.556 Node: WFP8 00:00:07.564 [Pipeline] sh 00:00:07.853 [Pipeline] setCustomBuildProperty 00:00:07.862 [Pipeline] echo 00:00:07.864 Cleanup processes 00:00:07.868 [Pipeline] sh 00:00:08.151 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.151 2397285 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.165 [Pipeline] sh 00:00:08.461 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.462 ++ grep -v 'sudo pgrep' 00:00:08.462 ++ awk '{print $1}' 00:00:08.462 + sudo kill -9 00:00:08.462 + true 00:00:08.477 [Pipeline] cleanWs 00:00:08.486 [WS-CLEANUP] Deleting project workspace... 00:00:08.486 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.494 [WS-CLEANUP] done 00:00:08.498 [Pipeline] setCustomBuildProperty 00:00:08.511 [Pipeline] sh 00:00:08.793 + sudo git config --global --replace-all safe.directory '*' 00:00:08.885 [Pipeline] httpRequest 00:00:08.909 [Pipeline] echo 00:00:08.911 Sorcerer 10.211.164.101 is alive 00:00:08.920 [Pipeline] httpRequest 00:00:08.924 HttpMethod: GET 00:00:08.924 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:08.925 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:08.938 Response Code: HTTP/1.1 200 OK 00:00:08.939 Success: Status code 200 is in the accepted range: 200,404 00:00:08.945 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:16.840 [Pipeline] sh 00:00:17.126 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:17.144 [Pipeline] httpRequest 00:00:17.180 [Pipeline] echo 00:00:17.182 Sorcerer 10.211.164.101 is alive 00:00:17.192 [Pipeline] httpRequest 00:00:17.198 HttpMethod: GET 00:00:17.198 URL: http://10.211.164.101/packages/spdk_f604975bacc64af9a6a88b4ef3871bde511bf6f2.tar.gz 00:00:17.199 Sending request to url: http://10.211.164.101/packages/spdk_f604975bacc64af9a6a88b4ef3871bde511bf6f2.tar.gz 00:00:17.223 Response Code: HTTP/1.1 200 OK 00:00:17.224 Success: Status code 200 is in the accepted range: 200,404 00:00:17.224 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_f604975bacc64af9a6a88b4ef3871bde511bf6f2.tar.gz 00:00:54.845 [Pipeline] sh 00:00:55.153 + tar --no-same-owner -xf spdk_f604975bacc64af9a6a88b4ef3871bde511bf6f2.tar.gz 00:00:57.704 [Pipeline] sh 00:00:57.989 + git -C spdk log --oneline -n5 00:00:57.989 f604975ba doc: fix deprecation.md typo 00:00:57.989 a95bbf233 blob: set parent_id properly on spdk_bs_blob_set_external_parent. 
00:00:57.989 248c547d0 nvmf/tcp: add option for selecting a sock impl 00:00:57.989 2d30d9f83 accel: introduce tasks in sequence limit 00:00:57.989 2728651ee accel: adjust task per ch define name 00:00:58.005 [Pipeline] } 00:00:58.022 [Pipeline] // stage 00:00:58.032 [Pipeline] stage 00:00:58.035 [Pipeline] { (Prepare) 00:00:58.057 [Pipeline] writeFile 00:00:58.078 [Pipeline] sh 00:00:58.362 + logger -p user.info -t JENKINS-CI 00:00:58.378 [Pipeline] sh 00:00:58.662 + logger -p user.info -t JENKINS-CI 00:00:58.677 [Pipeline] sh 00:00:58.960 + cat autorun-spdk.conf 00:00:58.960 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:58.960 SPDK_TEST_NVMF=1 00:00:58.960 SPDK_TEST_NVME_CLI=1 00:00:58.960 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:58.960 SPDK_TEST_NVMF_NICS=e810 00:00:58.960 SPDK_TEST_VFIOUSER=1 00:00:58.960 SPDK_RUN_UBSAN=1 00:00:58.960 NET_TYPE=phy 00:00:58.967 RUN_NIGHTLY=0 00:00:58.971 [Pipeline] readFile 00:00:58.993 [Pipeline] withEnv 00:00:58.995 [Pipeline] { 00:00:59.007 [Pipeline] sh 00:00:59.289 + set -ex 00:00:59.289 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:59.289 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:59.289 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:59.289 ++ SPDK_TEST_NVMF=1 00:00:59.289 ++ SPDK_TEST_NVME_CLI=1 00:00:59.289 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:59.289 ++ SPDK_TEST_NVMF_NICS=e810 00:00:59.289 ++ SPDK_TEST_VFIOUSER=1 00:00:59.289 ++ SPDK_RUN_UBSAN=1 00:00:59.289 ++ NET_TYPE=phy 00:00:59.289 ++ RUN_NIGHTLY=0 00:00:59.289 + case $SPDK_TEST_NVMF_NICS in 00:00:59.289 + DRIVERS=ice 00:00:59.289 + [[ tcp == \r\d\m\a ]] 00:00:59.289 + [[ -n ice ]] 00:00:59.289 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:59.289 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:59.289 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:59.289 rmmod: ERROR: Module irdma is not currently loaded 00:00:59.289 rmmod: ERROR: Module i40iw is not currently loaded 00:00:59.289 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:59.289 + true 00:00:59.289 + for D in $DRIVERS 00:00:59.289 + sudo modprobe ice 00:00:59.289 + exit 0 00:00:59.300 [Pipeline] } 00:00:59.322 [Pipeline] // withEnv 00:00:59.329 [Pipeline] } 00:00:59.347 [Pipeline] // stage 00:00:59.358 [Pipeline] catchError 00:00:59.360 [Pipeline] { 00:00:59.377 [Pipeline] timeout 00:00:59.377 Timeout set to expire in 50 min 00:00:59.379 [Pipeline] { 00:00:59.397 [Pipeline] stage 00:00:59.400 [Pipeline] { (Tests) 00:00:59.416 [Pipeline] sh 00:00:59.700 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:59.701 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:59.701 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:59.701 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:59.701 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:59.701 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:59.701 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:59.701 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:59.701 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:59.701 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:59.701 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:59.701 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:59.701 + source /etc/os-release 00:00:59.701 ++ NAME='Fedora Linux' 00:00:59.701 ++ VERSION='38 (Cloud Edition)' 00:00:59.701 ++ ID=fedora 00:00:59.701 ++ VERSION_ID=38 00:00:59.701 ++ VERSION_CODENAME= 00:00:59.701 ++ PLATFORM_ID=platform:f38 00:00:59.701 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:59.701 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:59.701 ++ LOGO=fedora-logo-icon 00:00:59.701 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:59.701 ++ HOME_URL=https://fedoraproject.org/ 00:00:59.701 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:59.701 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:59.701 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:59.701 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:59.701 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:59.701 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:59.701 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:59.701 ++ SUPPORT_END=2024-05-14 00:00:59.701 ++ VARIANT='Cloud Edition' 00:00:59.701 ++ VARIANT_ID=cloud 00:00:59.701 + uname -a 00:00:59.701 Linux spdk-wfp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:59.701 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:02.242 Hugepages 00:01:02.242 node hugesize free / total 00:01:02.242 node0 1048576kB 0 / 0 00:01:02.242 node0 2048kB 0 / 0 00:01:02.242 node1 1048576kB 0 / 0 00:01:02.242 node1 2048kB 0 / 0 00:01:02.242 00:01:02.242 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:02.242 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:02.242 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:02.242 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:02.242 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:02.242 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:02.242 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:02.242 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:02.242 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:02.242 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:02.242 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:02.242 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:02.242 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:02.242 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:02.242 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:02.242 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:02.242 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:02.242 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:02.242 + rm -f /tmp/spdk-ld-path 00:01:02.242 + source autorun-spdk.conf 00:01:02.242 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:02.242 ++ SPDK_TEST_NVMF=1 00:01:02.242 ++ SPDK_TEST_NVME_CLI=1 00:01:02.242 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:02.242 ++ SPDK_TEST_NVMF_NICS=e810 00:01:02.242 ++ SPDK_TEST_VFIOUSER=1 00:01:02.242 ++ SPDK_RUN_UBSAN=1 00:01:02.242 ++ NET_TYPE=phy 00:01:02.242 ++ RUN_NIGHTLY=0 00:01:02.242 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:02.242 + [[ -n '' ]] 00:01:02.242 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:02.242 + for M in /var/spdk/build-*-manifest.txt 00:01:02.242 + [[ -f 
/var/spdk/build-pkg-manifest.txt ]] 00:01:02.242 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:02.242 + for M in /var/spdk/build-*-manifest.txt 00:01:02.242 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:02.242 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:02.242 ++ uname 00:01:02.242 + [[ Linux == \L\i\n\u\x ]] 00:01:02.242 + sudo dmesg -T 00:01:02.242 + sudo dmesg --clear 00:01:02.242 + dmesg_pid=2398204 00:01:02.242 + [[ Fedora Linux == FreeBSD ]] 00:01:02.242 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:02.242 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:02.242 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:02.242 + [[ -x /usr/src/fio-static/fio ]] 00:01:02.242 + export FIO_BIN=/usr/src/fio-static/fio 00:01:02.242 + FIO_BIN=/usr/src/fio-static/fio 00:01:02.242 + sudo dmesg -Tw 00:01:02.242 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:02.242 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:02.242 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:02.242 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:02.242 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:02.242 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:02.242 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:02.242 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:02.242 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:02.242 Test configuration: 00:01:02.242 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:02.242 SPDK_TEST_NVMF=1 00:01:02.242 SPDK_TEST_NVME_CLI=1 00:01:02.242 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:02.242 SPDK_TEST_NVMF_NICS=e810 00:01:02.242 SPDK_TEST_VFIOUSER=1 00:01:02.242 SPDK_RUN_UBSAN=1 00:01:02.242 NET_TYPE=phy 00:01:02.242 RUN_NIGHTLY=0 20:26:36 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:02.242 20:26:36 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:02.243 20:26:36 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:02.243 20:26:36 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:02.243 20:26:36 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:02.243 20:26:36 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:02.243 20:26:36 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:02.243 20:26:36 -- paths/export.sh@5 -- $ export PATH 00:01:02.243 20:26:36 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:02.243 20:26:36 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:02.243 20:26:36 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:02.243 20:26:36 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721067996.XXXXXX 00:01:02.243 20:26:36 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721067996.wfeFi3 00:01:02.243 20:26:36 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:02.243 20:26:36 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:02.243 20:26:36 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:02.243 20:26:36 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:02.243 20:26:36 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:02.243 20:26:36 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:02.243 20:26:36 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:02.243 20:26:36 -- common/autotest_common.sh@10 -- $ set +x 00:01:02.243 20:26:36 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:02.243 20:26:36 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:02.243 20:26:36 -- pm/common@17 -- $ local monitor 00:01:02.243 20:26:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:02.243 20:26:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:02.243 20:26:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:02.243 20:26:36 -- pm/common@21 -- $ date +%s 00:01:02.243 20:26:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:02.243 20:26:36 -- pm/common@21 -- $ date +%s 00:01:02.243 20:26:36 -- pm/common@25 -- $ sleep 1 00:01:02.243 20:26:36 -- pm/common@21 -- $ date +%s 00:01:02.243 20:26:36 -- pm/common@21 -- $ date +%s 00:01:02.243 20:26:36 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721067996 00:01:02.243 20:26:36 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721067996 00:01:02.243 20:26:36 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721067996 00:01:02.243 20:26:36 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721067996 00:01:02.243 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721067996_collect-vmstat.pm.log 00:01:02.243 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721067996_collect-cpu-load.pm.log 00:01:02.243 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721067996_collect-cpu-temp.pm.log 00:01:02.243 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721067996_collect-bmc-pm.bmc.pm.log 00:01:03.220 20:26:37 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:03.220 20:26:37 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:03.220 20:26:37 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:03.220 20:26:37 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:03.220 20:26:37 -- spdk/autobuild.sh@16 -- $ date -u 00:01:03.220 Mon Jul 15 06:26:37 PM UTC 2024 00:01:03.220 20:26:37 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:03.220 v24.09-pre-210-gf604975ba 00:01:03.220 20:26:37 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:03.220 20:26:37 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:03.220 20:26:37 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:03.220 20:26:37 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:03.220 20:26:37 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:03.220 20:26:37 -- common/autotest_common.sh@10 -- $ set +x 00:01:03.220 ************************************ 00:01:03.220 START TEST ubsan 00:01:03.220 ************************************ 00:01:03.220 20:26:37 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:03.220 using ubsan 00:01:03.220 00:01:03.220 real 0m0.000s 00:01:03.220 user 0m0.000s 00:01:03.220 sys 0m0.000s 00:01:03.220 20:26:37 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:03.220 20:26:37 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:03.220 ************************************ 00:01:03.220 END TEST ubsan 00:01:03.220 ************************************ 00:01:03.220 20:26:37 -- common/autotest_common.sh@1142 -- $ return 0 00:01:03.220 20:26:37 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:03.220 20:26:37 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:03.220 20:26:37 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:03.220 20:26:37 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:03.220 20:26:37 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:03.220 20:26:37 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:03.220 20:26:37 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:03.220 20:26:37 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:03.220 20:26:37 -- spdk/autobuild.sh@67 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:03.479 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:03.479 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:03.737 Using 'verbs' RDMA provider 00:01:16.885 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:26.864 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:26.864 Creating mk/config.mk...done. 00:01:26.864 Creating mk/cc.flags.mk...done. 00:01:26.864 Type 'make' to build. 00:01:26.864 20:27:01 -- spdk/autobuild.sh@69 -- $ run_test make make -j96 00:01:26.864 20:27:01 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:26.864 20:27:01 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:26.864 20:27:01 -- common/autotest_common.sh@10 -- $ set +x 00:01:26.864 ************************************ 00:01:26.864 START TEST make 00:01:26.864 ************************************ 00:01:26.864 20:27:01 make -- common/autotest_common.sh@1123 -- $ make -j96 00:01:27.123 make[1]: Nothing to be done for 'all'. 00:01:28.510 The Meson build system 00:01:28.511 Version: 1.3.1 00:01:28.511 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:28.511 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:28.511 Build type: native build 00:01:28.511 Project name: libvfio-user 00:01:28.511 Project version: 0.0.1 00:01:28.511 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:28.511 C linker for the host machine: cc ld.bfd 2.39-16 00:01:28.511 Host machine cpu family: x86_64 00:01:28.511 Host machine cpu: x86_64 00:01:28.511 Run-time dependency threads found: YES 00:01:28.511 Library dl found: YES 00:01:28.511 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:28.511 Run-time dependency json-c found: YES 0.17 00:01:28.511 Run-time dependency cmocka found: YES 1.1.7 00:01:28.511 Program pytest-3 found: NO 00:01:28.511 Program flake8 found: NO 00:01:28.511 Program misspell-fixer found: NO 00:01:28.511 Program restructuredtext-lint found: NO 00:01:28.511 Program valgrind found: YES (/usr/bin/valgrind) 00:01:28.511 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:28.511 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:28.511 Compiler for C supports arguments -Wwrite-strings: YES 00:01:28.511 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:28.511 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:28.511 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:28.511 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:28.511 Build targets in project: 8 00:01:28.511 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:28.511 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:28.511 00:01:28.511 libvfio-user 0.0.1 00:01:28.511 00:01:28.511 User defined options 00:01:28.511 buildtype : debug 00:01:28.511 default_library: shared 00:01:28.511 libdir : /usr/local/lib 00:01:28.511 00:01:28.511 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:28.769 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:28.769 [1/37] Compiling C object samples/null.p/null.c.o 00:01:28.769 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:28.769 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:28.769 [4/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:28.769 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:28.769 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:28.769 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:28.769 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:28.769 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:28.769 [10/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:28.769 [11/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:28.769 [12/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:28.769 [13/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:28.769 [14/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:28.769 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:28.769 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:28.769 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:28.769 [18/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:28.769 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:28.769 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:28.769 [21/37] Compiling C object samples/server.p/server.c.o 00:01:28.769 [22/37] Compiling C object samples/client.p/client.c.o 00:01:28.769 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:28.769 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:28.769 [25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:28.769 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:29.027 [27/37] Linking target samples/client 00:01:29.027 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:29.027 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:01:29.027 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:29.027 [31/37] Linking target test/unit_tests 00:01:29.027 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:29.285 [33/37] Linking target samples/server 00:01:29.285 [34/37] Linking target samples/lspci 00:01:29.285 [35/37] Linking target samples/gpio-pci-idio-16 00:01:29.285 [36/37] Linking target samples/shadow_ioeventfd_server 00:01:29.285 [37/37] Linking target samples/null 00:01:29.285 INFO: autodetecting backend as ninja 00:01:29.285 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:29.285 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:29.544 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:29.544 ninja: no work to do. 00:01:34.816 The Meson build system 00:01:34.816 Version: 1.3.1 00:01:34.816 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:34.816 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:34.816 Build type: native build 00:01:34.816 Program cat found: YES (/usr/bin/cat) 00:01:34.816 Project name: DPDK 00:01:34.816 Project version: 24.03.0 00:01:34.816 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:34.816 C linker for the host machine: cc ld.bfd 2.39-16 00:01:34.816 Host machine cpu family: x86_64 00:01:34.816 Host machine cpu: x86_64 00:01:34.816 Message: ## Building in Developer Mode ## 00:01:34.816 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:34.816 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:34.816 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:34.816 Program python3 found: YES (/usr/bin/python3) 00:01:34.816 Program cat found: YES (/usr/bin/cat) 00:01:34.816 Compiler for C supports arguments -march=native: YES 00:01:34.816 Checking for size of "void *" : 8 00:01:34.816 Checking for size of "void *" : 8 (cached) 00:01:34.816 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:34.816 Library m found: YES 00:01:34.816 Library numa found: YES 00:01:34.816 Has header "numaif.h" : YES 00:01:34.816 Library fdt found: NO 00:01:34.816 Library execinfo found: NO 00:01:34.816 Has header "execinfo.h" : YES 00:01:34.816 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:34.816 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:34.816 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:34.816 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:34.816 Run-time dependency openssl found: YES 3.0.9 00:01:34.816 Run-time dependency libpcap found: YES 1.10.4 00:01:34.816 Has header "pcap.h" with dependency libpcap: YES 00:01:34.816 Compiler for C supports arguments -Wcast-qual: YES 00:01:34.816 Compiler for C supports arguments -Wdeprecated: YES 00:01:34.816 Compiler for C supports arguments -Wformat: YES 00:01:34.816 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:34.816 Compiler for C supports arguments -Wformat-security: NO 00:01:34.816 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:34.816 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:34.816 Compiler for C supports arguments -Wnested-externs: YES 00:01:34.816 Compiler for C supports arguments -Wold-style-definition: YES 00:01:34.816 Compiler for C supports arguments -Wpointer-arith: YES 00:01:34.816 Compiler for C supports arguments -Wsign-compare: YES 00:01:34.816 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:34.816 Compiler for C supports arguments -Wundef: YES 00:01:34.816 Compiler for C supports arguments -Wwrite-strings: YES 00:01:34.816 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:34.816 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:34.816 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:34.816 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:34.816 Program objdump found: YES (/usr/bin/objdump) 00:01:34.816 Compiler for C supports arguments -mavx512f: YES 00:01:34.816 Checking if "AVX512 checking" compiles: YES 00:01:34.816 Fetching value of define "__SSE4_2__" : 1 00:01:34.816 Fetching value of define "__AES__" : 1 00:01:34.816 Fetching value of define "__AVX__" : 1 00:01:34.816 Fetching value of define "__AVX2__" : 1 00:01:34.816 Fetching value of define "__AVX512BW__" : 1 00:01:34.816 Fetching value of define "__AVX512CD__" : 1 00:01:34.816 Fetching value of define "__AVX512DQ__" : 1 00:01:34.816 Fetching value of define "__AVX512F__" : 1 00:01:34.816 Fetching value of define "__AVX512VL__" : 1 00:01:34.816 Fetching value of define "__PCLMUL__" : 1 00:01:34.816 Fetching value of define "__RDRND__" : 1 00:01:34.816 Fetching value of define "__RDSEED__" : 1 00:01:34.816 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:34.816 Fetching value of define "__znver1__" : (undefined) 00:01:34.816 Fetching value of define "__znver2__" : (undefined) 00:01:34.816 Fetching value of define "__znver3__" : (undefined) 00:01:34.816 Fetching value of define "__znver4__" : (undefined) 00:01:34.816 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:34.816 Message: lib/log: Defining dependency "log" 00:01:34.816 Message: lib/kvargs: Defining dependency "kvargs" 00:01:34.816 Message: lib/telemetry: Defining dependency "telemetry" 00:01:34.816 Checking for function "getentropy" : NO 00:01:34.816 Message: lib/eal: Defining dependency "eal" 00:01:34.816 Message: lib/ring: Defining dependency "ring" 00:01:34.816 Message: lib/rcu: Defining dependency "rcu" 00:01:34.816 Message: lib/mempool: Defining dependency "mempool" 00:01:34.816 Message: lib/mbuf: Defining dependency "mbuf" 00:01:34.816 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:34.816 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:34.816 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:34.816 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:34.816 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:34.817 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:34.817 Compiler for C supports arguments -mpclmul: YES 00:01:34.817 Compiler for C supports arguments -maes: YES 00:01:34.817 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:34.817 Compiler for C supports arguments -mavx512bw: YES 00:01:34.817 Compiler for C supports arguments -mavx512dq: YES 00:01:34.817 Compiler for C supports arguments -mavx512vl: YES 00:01:34.817 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:34.817 Compiler for C supports arguments -mavx2: YES 00:01:34.817 Compiler for C supports arguments -mavx: YES 00:01:34.817 Message: lib/net: Defining dependency "net" 00:01:34.817 Message: lib/meter: Defining dependency "meter" 00:01:34.817 Message: lib/ethdev: Defining dependency "ethdev" 00:01:34.817 Message: lib/pci: Defining dependency "pci" 00:01:34.817 Message: lib/cmdline: Defining dependency "cmdline" 00:01:34.817 Message: lib/hash: Defining dependency "hash" 00:01:34.817 Message: lib/timer: Defining dependency "timer" 00:01:34.817 Message: lib/compressdev: Defining dependency "compressdev" 00:01:34.817 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:34.817 Message: lib/dmadev: Defining dependency "dmadev" 00:01:34.817 
Compiler for C supports arguments -Wno-cast-qual: YES 00:01:34.817 Message: lib/power: Defining dependency "power" 00:01:34.817 Message: lib/reorder: Defining dependency "reorder" 00:01:34.817 Message: lib/security: Defining dependency "security" 00:01:34.817 Has header "linux/userfaultfd.h" : YES 00:01:34.817 Has header "linux/vduse.h" : YES 00:01:34.817 Message: lib/vhost: Defining dependency "vhost" 00:01:34.817 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:34.817 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:34.817 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:34.817 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:34.817 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:34.817 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:34.817 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:34.817 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:34.817 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:34.817 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:34.817 Program doxygen found: YES (/usr/bin/doxygen) 00:01:34.817 Configuring doxy-api-html.conf using configuration 00:01:34.817 Configuring doxy-api-man.conf using configuration 00:01:34.817 Program mandb found: YES (/usr/bin/mandb) 00:01:34.817 Program sphinx-build found: NO 00:01:34.817 Configuring rte_build_config.h using configuration 00:01:34.817 Message: 00:01:34.817 ================= 00:01:34.817 Applications Enabled 00:01:34.817 ================= 00:01:34.817 00:01:34.817 apps: 00:01:34.817 00:01:34.817 00:01:34.817 Message: 00:01:34.817 ================= 00:01:34.817 Libraries Enabled 00:01:34.817 ================= 00:01:34.817 00:01:34.817 libs: 00:01:34.817 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:34.817 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:34.817 cryptodev, dmadev, power, reorder, security, vhost, 00:01:34.817 00:01:34.817 Message: 00:01:34.817 =============== 00:01:34.817 Drivers Enabled 00:01:34.817 =============== 00:01:34.817 00:01:34.817 common: 00:01:34.817 00:01:34.817 bus: 00:01:34.817 pci, vdev, 00:01:34.817 mempool: 00:01:34.817 ring, 00:01:34.817 dma: 00:01:34.817 00:01:34.817 net: 00:01:34.817 00:01:34.817 crypto: 00:01:34.817 00:01:34.817 compress: 00:01:34.817 00:01:34.817 vdpa: 00:01:34.817 00:01:34.817 00:01:34.817 Message: 00:01:34.817 ================= 00:01:34.817 Content Skipped 00:01:34.817 ================= 00:01:34.817 00:01:34.817 apps: 00:01:34.817 dumpcap: explicitly disabled via build config 00:01:34.817 graph: explicitly disabled via build config 00:01:34.817 pdump: explicitly disabled via build config 00:01:34.817 proc-info: explicitly disabled via build config 00:01:34.817 test-acl: explicitly disabled via build config 00:01:34.817 test-bbdev: explicitly disabled via build config 00:01:34.817 test-cmdline: explicitly disabled via build config 00:01:34.817 test-compress-perf: explicitly disabled via build config 00:01:34.817 test-crypto-perf: explicitly disabled via build config 00:01:34.817 test-dma-perf: explicitly disabled via build config 00:01:34.817 test-eventdev: explicitly disabled via build config 00:01:34.817 test-fib: explicitly disabled via build config 00:01:34.817 test-flow-perf: explicitly disabled via build config 00:01:34.817 test-gpudev: explicitly disabled via build config 
00:01:34.817 test-mldev: explicitly disabled via build config 00:01:34.817 test-pipeline: explicitly disabled via build config 00:01:34.817 test-pmd: explicitly disabled via build config 00:01:34.817 test-regex: explicitly disabled via build config 00:01:34.817 test-sad: explicitly disabled via build config 00:01:34.817 test-security-perf: explicitly disabled via build config 00:01:34.817 00:01:34.817 libs: 00:01:34.817 argparse: explicitly disabled via build config 00:01:34.817 metrics: explicitly disabled via build config 00:01:34.817 acl: explicitly disabled via build config 00:01:34.817 bbdev: explicitly disabled via build config 00:01:34.817 bitratestats: explicitly disabled via build config 00:01:34.817 bpf: explicitly disabled via build config 00:01:34.817 cfgfile: explicitly disabled via build config 00:01:34.817 distributor: explicitly disabled via build config 00:01:34.817 efd: explicitly disabled via build config 00:01:34.817 eventdev: explicitly disabled via build config 00:01:34.817 dispatcher: explicitly disabled via build config 00:01:34.817 gpudev: explicitly disabled via build config 00:01:34.817 gro: explicitly disabled via build config 00:01:34.817 gso: explicitly disabled via build config 00:01:34.817 ip_frag: explicitly disabled via build config 00:01:34.817 jobstats: explicitly disabled via build config 00:01:34.817 latencystats: explicitly disabled via build config 00:01:34.817 lpm: explicitly disabled via build config 00:01:34.817 member: explicitly disabled via build config 00:01:34.817 pcapng: explicitly disabled via build config 00:01:34.817 rawdev: explicitly disabled via build config 00:01:34.817 regexdev: explicitly disabled via build config 00:01:34.817 mldev: explicitly disabled via build config 00:01:34.817 rib: explicitly disabled via build config 00:01:34.817 sched: explicitly disabled via build config 00:01:34.817 stack: explicitly disabled via build config 00:01:34.817 ipsec: explicitly disabled via build config 00:01:34.817 pdcp: explicitly disabled via build config 00:01:34.817 fib: explicitly disabled via build config 00:01:34.817 port: explicitly disabled via build config 00:01:34.817 pdump: explicitly disabled via build config 00:01:34.817 table: explicitly disabled via build config 00:01:34.817 pipeline: explicitly disabled via build config 00:01:34.817 graph: explicitly disabled via build config 00:01:34.817 node: explicitly disabled via build config 00:01:34.817 00:01:34.817 drivers: 00:01:34.818 common/cpt: not in enabled drivers build config 00:01:34.818 common/dpaax: not in enabled drivers build config 00:01:34.818 common/iavf: not in enabled drivers build config 00:01:34.818 common/idpf: not in enabled drivers build config 00:01:34.818 common/ionic: not in enabled drivers build config 00:01:34.818 common/mvep: not in enabled drivers build config 00:01:34.818 common/octeontx: not in enabled drivers build config 00:01:34.818 bus/auxiliary: not in enabled drivers build config 00:01:34.818 bus/cdx: not in enabled drivers build config 00:01:34.818 bus/dpaa: not in enabled drivers build config 00:01:34.818 bus/fslmc: not in enabled drivers build config 00:01:34.818 bus/ifpga: not in enabled drivers build config 00:01:34.818 bus/platform: not in enabled drivers build config 00:01:34.818 bus/uacce: not in enabled drivers build config 00:01:34.818 bus/vmbus: not in enabled drivers build config 00:01:34.818 common/cnxk: not in enabled drivers build config 00:01:34.818 common/mlx5: not in enabled drivers build config 00:01:34.818 common/nfp: not in 
enabled drivers build config 00:01:34.818 common/nitrox: not in enabled drivers build config 00:01:34.818 common/qat: not in enabled drivers build config 00:01:34.818 common/sfc_efx: not in enabled drivers build config 00:01:34.818 mempool/bucket: not in enabled drivers build config 00:01:34.818 mempool/cnxk: not in enabled drivers build config 00:01:34.818 mempool/dpaa: not in enabled drivers build config 00:01:34.818 mempool/dpaa2: not in enabled drivers build config 00:01:34.818 mempool/octeontx: not in enabled drivers build config 00:01:34.818 mempool/stack: not in enabled drivers build config 00:01:34.818 dma/cnxk: not in enabled drivers build config 00:01:34.818 dma/dpaa: not in enabled drivers build config 00:01:34.818 dma/dpaa2: not in enabled drivers build config 00:01:34.818 dma/hisilicon: not in enabled drivers build config 00:01:34.818 dma/idxd: not in enabled drivers build config 00:01:34.818 dma/ioat: not in enabled drivers build config 00:01:34.818 dma/skeleton: not in enabled drivers build config 00:01:34.818 net/af_packet: not in enabled drivers build config 00:01:34.818 net/af_xdp: not in enabled drivers build config 00:01:34.818 net/ark: not in enabled drivers build config 00:01:34.818 net/atlantic: not in enabled drivers build config 00:01:34.818 net/avp: not in enabled drivers build config 00:01:34.818 net/axgbe: not in enabled drivers build config 00:01:34.818 net/bnx2x: not in enabled drivers build config 00:01:34.818 net/bnxt: not in enabled drivers build config 00:01:34.818 net/bonding: not in enabled drivers build config 00:01:34.818 net/cnxk: not in enabled drivers build config 00:01:34.818 net/cpfl: not in enabled drivers build config 00:01:34.818 net/cxgbe: not in enabled drivers build config 00:01:34.818 net/dpaa: not in enabled drivers build config 00:01:34.818 net/dpaa2: not in enabled drivers build config 00:01:34.818 net/e1000: not in enabled drivers build config 00:01:34.818 net/ena: not in enabled drivers build config 00:01:34.818 net/enetc: not in enabled drivers build config 00:01:34.818 net/enetfec: not in enabled drivers build config 00:01:34.818 net/enic: not in enabled drivers build config 00:01:34.818 net/failsafe: not in enabled drivers build config 00:01:34.818 net/fm10k: not in enabled drivers build config 00:01:34.818 net/gve: not in enabled drivers build config 00:01:34.818 net/hinic: not in enabled drivers build config 00:01:34.818 net/hns3: not in enabled drivers build config 00:01:34.818 net/i40e: not in enabled drivers build config 00:01:34.818 net/iavf: not in enabled drivers build config 00:01:34.818 net/ice: not in enabled drivers build config 00:01:34.818 net/idpf: not in enabled drivers build config 00:01:34.818 net/igc: not in enabled drivers build config 00:01:34.818 net/ionic: not in enabled drivers build config 00:01:34.818 net/ipn3ke: not in enabled drivers build config 00:01:34.818 net/ixgbe: not in enabled drivers build config 00:01:34.818 net/mana: not in enabled drivers build config 00:01:34.818 net/memif: not in enabled drivers build config 00:01:34.818 net/mlx4: not in enabled drivers build config 00:01:34.818 net/mlx5: not in enabled drivers build config 00:01:34.818 net/mvneta: not in enabled drivers build config 00:01:34.818 net/mvpp2: not in enabled drivers build config 00:01:34.818 net/netvsc: not in enabled drivers build config 00:01:34.818 net/nfb: not in enabled drivers build config 00:01:34.818 net/nfp: not in enabled drivers build config 00:01:34.818 net/ngbe: not in enabled drivers build config 00:01:34.818 
net/null: not in enabled drivers build config 00:01:34.818 net/octeontx: not in enabled drivers build config 00:01:34.818 net/octeon_ep: not in enabled drivers build config 00:01:34.818 net/pcap: not in enabled drivers build config 00:01:34.818 net/pfe: not in enabled drivers build config 00:01:34.818 net/qede: not in enabled drivers build config 00:01:34.818 net/ring: not in enabled drivers build config 00:01:34.818 net/sfc: not in enabled drivers build config 00:01:34.818 net/softnic: not in enabled drivers build config 00:01:34.818 net/tap: not in enabled drivers build config 00:01:34.818 net/thunderx: not in enabled drivers build config 00:01:34.818 net/txgbe: not in enabled drivers build config 00:01:34.818 net/vdev_netvsc: not in enabled drivers build config 00:01:34.818 net/vhost: not in enabled drivers build config 00:01:34.818 net/virtio: not in enabled drivers build config 00:01:34.818 net/vmxnet3: not in enabled drivers build config 00:01:34.818 raw/*: missing internal dependency, "rawdev" 00:01:34.818 crypto/armv8: not in enabled drivers build config 00:01:34.818 crypto/bcmfs: not in enabled drivers build config 00:01:34.818 crypto/caam_jr: not in enabled drivers build config 00:01:34.818 crypto/ccp: not in enabled drivers build config 00:01:34.818 crypto/cnxk: not in enabled drivers build config 00:01:34.818 crypto/dpaa_sec: not in enabled drivers build config 00:01:34.818 crypto/dpaa2_sec: not in enabled drivers build config 00:01:34.818 crypto/ipsec_mb: not in enabled drivers build config 00:01:34.818 crypto/mlx5: not in enabled drivers build config 00:01:34.818 crypto/mvsam: not in enabled drivers build config 00:01:34.818 crypto/nitrox: not in enabled drivers build config 00:01:34.818 crypto/null: not in enabled drivers build config 00:01:34.818 crypto/octeontx: not in enabled drivers build config 00:01:34.818 crypto/openssl: not in enabled drivers build config 00:01:34.818 crypto/scheduler: not in enabled drivers build config 00:01:34.818 crypto/uadk: not in enabled drivers build config 00:01:34.818 crypto/virtio: not in enabled drivers build config 00:01:34.818 compress/isal: not in enabled drivers build config 00:01:34.818 compress/mlx5: not in enabled drivers build config 00:01:34.818 compress/nitrox: not in enabled drivers build config 00:01:34.818 compress/octeontx: not in enabled drivers build config 00:01:34.818 compress/zlib: not in enabled drivers build config 00:01:34.818 regex/*: missing internal dependency, "regexdev" 00:01:34.818 ml/*: missing internal dependency, "mldev" 00:01:34.818 vdpa/ifc: not in enabled drivers build config 00:01:34.818 vdpa/mlx5: not in enabled drivers build config 00:01:34.818 vdpa/nfp: not in enabled drivers build config 00:01:34.818 vdpa/sfc: not in enabled drivers build config 00:01:34.818 event/*: missing internal dependency, "eventdev" 00:01:34.818 baseband/*: missing internal dependency, "bbdev" 00:01:34.818 gpu/*: missing internal dependency, "gpudev" 00:01:34.818 00:01:34.818 00:01:34.818 Build targets in project: 85 00:01:34.818 00:01:34.818 DPDK 24.03.0 00:01:34.818 00:01:34.818 User defined options 00:01:34.818 buildtype : debug 00:01:34.818 default_library : shared 00:01:34.818 libdir : lib 00:01:34.818 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:34.818 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:34.818 c_link_args : 00:01:34.818 cpu_instruction_set: native 00:01:34.818 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:01:34.818 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:01:34.818 enable_docs : false 00:01:34.818 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:34.818 enable_kmods : false 00:01:34.818 max_lcores : 128 00:01:34.818 tests : false 00:01:34.818 00:01:34.818 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:35.076 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:35.347 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:35.347 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:35.347 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:35.347 [4/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:35.347 [5/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:35.347 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:35.347 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:35.347 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:35.347 [9/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:35.347 [10/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:35.347 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:35.347 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:35.347 [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:35.347 [14/268] Linking static target lib/librte_kvargs.a 00:01:35.347 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:35.347 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:35.605 [17/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:35.605 [18/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:35.605 [19/268] Linking static target lib/librte_log.a 00:01:35.605 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:35.605 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:35.605 [22/268] Linking static target lib/librte_pci.a 00:01:35.605 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:35.605 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:35.605 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:35.605 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:35.867 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:35.867 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:35.867 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:35.867 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:35.867 [31/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:35.867 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:35.867 [33/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:35.867 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:35.867 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:35.867 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:35.867 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:35.867 [38/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:35.867 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:35.867 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:35.867 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:35.867 [42/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:35.867 [43/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:35.867 [44/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:35.867 [45/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:35.867 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:35.867 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:35.867 [48/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:35.867 [49/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:35.867 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:35.867 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:35.867 [52/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:35.867 [53/268] Linking static target lib/librte_meter.a 00:01:35.867 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:35.867 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:35.867 [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:35.867 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:35.867 [58/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:35.867 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:35.867 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:35.867 [61/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:35.867 [62/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:35.867 [63/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:35.867 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:35.867 [65/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:35.867 [66/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:35.867 [67/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:35.867 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:35.867 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:35.867 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:35.867 [71/268] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:35.867 [72/268] Linking static target lib/librte_ring.a 00:01:35.867 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:35.867 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:35.867 [75/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:35.867 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:35.867 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:35.867 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:35.867 [79/268] Linking static target lib/librte_telemetry.a 00:01:35.867 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:35.867 [81/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:35.867 [82/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:35.867 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:35.867 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:35.867 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:35.867 [86/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:35.867 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:35.867 [88/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:35.867 [89/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:35.867 [90/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:35.867 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:35.868 [92/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:35.868 [93/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:35.868 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:35.868 [95/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:35.868 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:35.868 [97/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:35.868 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:35.868 [99/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.127 [100/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:36.127 [101/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:36.127 [102/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:36.127 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:36.127 [104/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:36.127 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:36.127 [106/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:36.127 [107/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:36.127 [108/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:36.127 [109/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:36.127 [110/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:36.127 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 
00:01:36.127 [112/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.127 [113/268] Linking static target lib/librte_net.a 00:01:36.127 [114/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:36.127 [115/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:36.127 [116/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:36.127 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:36.127 [118/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:36.127 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:36.127 [120/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:36.127 [121/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:36.127 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:36.127 [123/268] Linking static target lib/librte_mempool.a 00:01:36.127 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:36.127 [125/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:36.127 [126/268] Linking static target lib/librte_eal.a 00:01:36.127 [127/268] Linking static target lib/librte_rcu.a 00:01:36.127 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:36.127 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:36.127 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:36.127 [131/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.127 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:36.127 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:36.127 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:36.127 [135/268] Linking static target lib/librte_cmdline.a 00:01:36.127 [136/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.127 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:36.127 [138/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.127 [139/268] Linking target lib/librte_log.so.24.1 00:01:36.127 [140/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:36.127 [141/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:36.385 [142/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:36.385 [143/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.385 [144/268] Linking static target lib/librte_mbuf.a 00:01:36.385 [145/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:36.385 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:36.385 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:36.385 [148/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:36.385 [149/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:36.385 [150/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.385 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:36.385 [152/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:36.385 [153/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:36.385 [154/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:36.385 [155/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.385 [156/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:36.385 [157/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:36.385 [158/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:36.385 [159/268] Linking static target lib/librte_timer.a 00:01:36.385 [160/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:36.385 [161/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:36.385 [162/268] Linking static target lib/librte_reorder.a 00:01:36.385 [163/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:36.385 [164/268] Linking static target lib/librte_dmadev.a 00:01:36.385 [165/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:36.385 [166/268] Linking target lib/librte_telemetry.so.24.1 00:01:36.385 [167/268] Linking target lib/librte_kvargs.so.24.1 00:01:36.385 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:36.385 [169/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:36.385 [170/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:36.385 [171/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:36.385 [172/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:36.385 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:36.385 [174/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:36.385 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:36.385 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:36.385 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:36.385 [178/268] Linking static target lib/librte_power.a 00:01:36.385 [179/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:36.385 [180/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:36.385 [181/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:36.385 [182/268] Linking static target lib/librte_security.a 00:01:36.385 [183/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:36.385 [184/268] Linking static target lib/librte_compressdev.a 00:01:36.385 [185/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:36.385 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:36.385 [187/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:36.643 [188/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:36.643 [189/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:36.643 [190/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:36.643 [191/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:36.643 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:36.643 [193/268] Compiling C object 
drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:36.643 [194/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:36.643 [195/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:36.643 [196/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:36.643 [197/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:36.643 [198/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:36.643 [199/268] Linking static target drivers/librte_bus_vdev.a 00:01:36.643 [200/268] Linking static target lib/librte_hash.a 00:01:36.643 [201/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:36.643 [202/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:36.643 [203/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:36.643 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:36.643 [205/268] Linking static target lib/librte_cryptodev.a 00:01:36.643 [206/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.643 [207/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:36.643 [208/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:36.902 [209/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.902 [210/268] Linking static target drivers/librte_bus_pci.a 00:01:36.902 [211/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:36.902 [212/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.902 [213/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:36.902 [214/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:36.902 [215/268] Linking static target drivers/librte_mempool_ring.a 00:01:36.902 [216/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:36.902 [217/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.902 [218/268] Linking static target lib/librte_ethdev.a 00:01:36.902 [219/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.902 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.902 [221/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.160 [222/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.160 [223/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:37.160 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.160 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.419 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.419 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.353 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:38.353 [229/268] Linking static target lib/librte_vhost.a 00:01:38.610 
[230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.020 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.283 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.541 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.800 [234/268] Linking target lib/librte_eal.so.24.1 00:01:45.800 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:45.800 [236/268] Linking target lib/librte_ring.so.24.1 00:01:45.800 [237/268] Linking target lib/librte_pci.so.24.1 00:01:45.800 [238/268] Linking target lib/librte_dmadev.so.24.1 00:01:45.800 [239/268] Linking target lib/librte_timer.so.24.1 00:01:45.800 [240/268] Linking target lib/librte_meter.so.24.1 00:01:45.800 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:46.059 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:46.059 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:46.059 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:46.059 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:46.059 [246/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:46.059 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:46.059 [248/268] Linking target lib/librte_mempool.so.24.1 00:01:46.059 [249/268] Linking target lib/librte_rcu.so.24.1 00:01:46.059 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:46.059 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:46.059 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:46.059 [253/268] Linking target lib/librte_mbuf.so.24.1 00:01:46.319 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:46.319 [255/268] Linking target lib/librte_reorder.so.24.1 00:01:46.319 [256/268] Linking target lib/librte_compressdev.so.24.1 00:01:46.319 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:01:46.319 [258/268] Linking target lib/librte_net.so.24.1 00:01:46.577 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:46.577 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:46.578 [261/268] Linking target lib/librte_security.so.24.1 00:01:46.578 [262/268] Linking target lib/librte_cmdline.so.24.1 00:01:46.578 [263/268] Linking target lib/librte_hash.so.24.1 00:01:46.578 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:46.578 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:46.578 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:46.578 [267/268] Linking target lib/librte_power.so.24.1 00:01:46.836 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:46.836 INFO: autodetecting backend as ninja 00:01:46.836 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:01:47.775 CC lib/ut_mock/mock.o 00:01:47.775 CC lib/log/log.o 00:01:47.775 CC lib/log/log_flags.o 00:01:47.775 CC lib/log/log_deprecated.o 00:01:47.775 CC lib/ut/ut.o 
00:01:47.775 LIB libspdk_log.a 00:01:47.775 LIB libspdk_ut_mock.a 00:01:47.775 LIB libspdk_ut.a 00:01:47.775 SO libspdk_log.so.7.0 00:01:47.775 SO libspdk_ut_mock.so.6.0 00:01:47.775 SO libspdk_ut.so.2.0 00:01:47.775 SYMLINK libspdk_log.so 00:01:47.775 SYMLINK libspdk_ut_mock.so 00:01:48.034 SYMLINK libspdk_ut.so 00:01:48.034 CC lib/util/base64.o 00:01:48.034 CC lib/util/cpuset.o 00:01:48.034 CC lib/util/bit_array.o 00:01:48.034 CC lib/util/crc16.o 00:01:48.034 CC lib/util/crc32.o 00:01:48.034 CC lib/util/crc32c.o 00:01:48.034 CC lib/util/crc32_ieee.o 00:01:48.034 CC lib/util/crc64.o 00:01:48.034 CC lib/util/dif.o 00:01:48.034 CC lib/util/fd.o 00:01:48.034 CC lib/util/file.o 00:01:48.034 CC lib/util/hexlify.o 00:01:48.034 CC lib/util/iov.o 00:01:48.034 CC lib/util/math.o 00:01:48.034 CC lib/util/pipe.o 00:01:48.034 CC lib/util/strerror_tls.o 00:01:48.034 CC lib/util/string.o 00:01:48.034 CC lib/util/uuid.o 00:01:48.034 CC lib/util/fd_group.o 00:01:48.034 CC lib/util/xor.o 00:01:48.034 CC lib/util/zipf.o 00:01:48.293 CC lib/dma/dma.o 00:01:48.293 CXX lib/trace_parser/trace.o 00:01:48.293 CC lib/ioat/ioat.o 00:01:48.293 CC lib/vfio_user/host/vfio_user_pci.o 00:01:48.293 CC lib/vfio_user/host/vfio_user.o 00:01:48.293 LIB libspdk_dma.a 00:01:48.293 SO libspdk_dma.so.4.0 00:01:48.551 SYMLINK libspdk_dma.so 00:01:48.551 LIB libspdk_ioat.a 00:01:48.551 SO libspdk_ioat.so.7.0 00:01:48.551 SYMLINK libspdk_ioat.so 00:01:48.551 LIB libspdk_vfio_user.a 00:01:48.551 LIB libspdk_util.a 00:01:48.551 SO libspdk_vfio_user.so.5.0 00:01:48.551 SO libspdk_util.so.9.1 00:01:48.551 SYMLINK libspdk_vfio_user.so 00:01:48.809 SYMLINK libspdk_util.so 00:01:48.809 LIB libspdk_trace_parser.a 00:01:48.809 SO libspdk_trace_parser.so.5.0 00:01:49.067 SYMLINK libspdk_trace_parser.so 00:01:49.067 CC lib/rdma_utils/rdma_utils.o 00:01:49.067 CC lib/conf/conf.o 00:01:49.067 CC lib/rdma_provider/common.o 00:01:49.067 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:49.067 CC lib/json/json_parse.o 00:01:49.067 CC lib/env_dpdk/env.o 00:01:49.067 CC lib/json/json_util.o 00:01:49.067 CC lib/env_dpdk/memory.o 00:01:49.067 CC lib/json/json_write.o 00:01:49.067 CC lib/env_dpdk/pci.o 00:01:49.067 CC lib/env_dpdk/init.o 00:01:49.067 CC lib/env_dpdk/threads.o 00:01:49.067 CC lib/env_dpdk/pci_ioat.o 00:01:49.067 CC lib/env_dpdk/pci_virtio.o 00:01:49.067 CC lib/env_dpdk/pci_vmd.o 00:01:49.067 CC lib/env_dpdk/pci_idxd.o 00:01:49.067 CC lib/env_dpdk/pci_event.o 00:01:49.067 CC lib/vmd/vmd.o 00:01:49.067 CC lib/env_dpdk/sigbus_handler.o 00:01:49.067 CC lib/vmd/led.o 00:01:49.067 CC lib/env_dpdk/pci_dpdk.o 00:01:49.067 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:49.067 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:49.067 CC lib/idxd/idxd.o 00:01:49.067 CC lib/idxd/idxd_user.o 00:01:49.067 CC lib/idxd/idxd_kernel.o 00:01:49.325 LIB libspdk_rdma_provider.a 00:01:49.325 LIB libspdk_conf.a 00:01:49.325 LIB libspdk_rdma_utils.a 00:01:49.325 SO libspdk_conf.so.6.0 00:01:49.325 SO libspdk_rdma_provider.so.6.0 00:01:49.325 SO libspdk_rdma_utils.so.1.0 00:01:49.325 LIB libspdk_json.a 00:01:49.325 SYMLINK libspdk_conf.so 00:01:49.325 SYMLINK libspdk_rdma_utils.so 00:01:49.325 SO libspdk_json.so.6.0 00:01:49.325 SYMLINK libspdk_rdma_provider.so 00:01:49.325 SYMLINK libspdk_json.so 00:01:49.583 LIB libspdk_idxd.a 00:01:49.583 SO libspdk_idxd.so.12.0 00:01:49.583 LIB libspdk_vmd.a 00:01:49.583 SO libspdk_vmd.so.6.0 00:01:49.583 SYMLINK libspdk_idxd.so 00:01:49.583 SYMLINK libspdk_vmd.so 00:01:49.583 CC lib/jsonrpc/jsonrpc_server.o 00:01:49.583 CC 
lib/jsonrpc/jsonrpc_client.o 00:01:49.583 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:49.583 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:49.842 LIB libspdk_jsonrpc.a 00:01:49.842 SO libspdk_jsonrpc.so.6.0 00:01:50.101 SYMLINK libspdk_jsonrpc.so 00:01:50.101 LIB libspdk_env_dpdk.a 00:01:50.101 SO libspdk_env_dpdk.so.14.1 00:01:50.360 CC lib/rpc/rpc.o 00:01:50.360 SYMLINK libspdk_env_dpdk.so 00:01:50.360 LIB libspdk_rpc.a 00:01:50.360 SO libspdk_rpc.so.6.0 00:01:50.618 SYMLINK libspdk_rpc.so 00:01:50.883 CC lib/keyring/keyring.o 00:01:50.883 CC lib/keyring/keyring_rpc.o 00:01:50.883 CC lib/trace/trace.o 00:01:50.883 CC lib/trace/trace_flags.o 00:01:50.883 CC lib/trace/trace_rpc.o 00:01:50.883 CC lib/notify/notify.o 00:01:50.883 CC lib/notify/notify_rpc.o 00:01:50.883 LIB libspdk_notify.a 00:01:50.883 SO libspdk_notify.so.6.0 00:01:50.883 LIB libspdk_keyring.a 00:01:51.144 LIB libspdk_trace.a 00:01:51.144 SO libspdk_keyring.so.1.0 00:01:51.144 SYMLINK libspdk_notify.so 00:01:51.144 SO libspdk_trace.so.10.0 00:01:51.144 SYMLINK libspdk_keyring.so 00:01:51.144 SYMLINK libspdk_trace.so 00:01:51.402 CC lib/thread/thread.o 00:01:51.402 CC lib/thread/iobuf.o 00:01:51.402 CC lib/sock/sock.o 00:01:51.402 CC lib/sock/sock_rpc.o 00:01:51.661 LIB libspdk_sock.a 00:01:51.661 SO libspdk_sock.so.10.0 00:01:51.919 SYMLINK libspdk_sock.so 00:01:52.178 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:52.178 CC lib/nvme/nvme_ctrlr.o 00:01:52.178 CC lib/nvme/nvme_ns_cmd.o 00:01:52.178 CC lib/nvme/nvme_fabric.o 00:01:52.178 CC lib/nvme/nvme_ns.o 00:01:52.178 CC lib/nvme/nvme_pcie_common.o 00:01:52.178 CC lib/nvme/nvme_qpair.o 00:01:52.178 CC lib/nvme/nvme_pcie.o 00:01:52.178 CC lib/nvme/nvme_quirks.o 00:01:52.178 CC lib/nvme/nvme.o 00:01:52.178 CC lib/nvme/nvme_transport.o 00:01:52.178 CC lib/nvme/nvme_discovery.o 00:01:52.178 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:52.178 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:52.178 CC lib/nvme/nvme_tcp.o 00:01:52.178 CC lib/nvme/nvme_io_msg.o 00:01:52.178 CC lib/nvme/nvme_opal.o 00:01:52.178 CC lib/nvme/nvme_poll_group.o 00:01:52.178 CC lib/nvme/nvme_zns.o 00:01:52.178 CC lib/nvme/nvme_stubs.o 00:01:52.178 CC lib/nvme/nvme_auth.o 00:01:52.178 CC lib/nvme/nvme_cuse.o 00:01:52.178 CC lib/nvme/nvme_vfio_user.o 00:01:52.178 CC lib/nvme/nvme_rdma.o 00:01:52.436 LIB libspdk_thread.a 00:01:52.436 SO libspdk_thread.so.10.1 00:01:52.436 SYMLINK libspdk_thread.so 00:01:52.694 CC lib/virtio/virtio.o 00:01:52.694 CC lib/accel/accel.o 00:01:52.694 CC lib/accel/accel_rpc.o 00:01:52.694 CC lib/virtio/virtio_vhost_user.o 00:01:52.694 CC lib/virtio/virtio_vfio_user.o 00:01:52.694 CC lib/virtio/virtio_pci.o 00:01:52.694 CC lib/accel/accel_sw.o 00:01:52.694 CC lib/init/subsystem.o 00:01:52.694 CC lib/init/json_config.o 00:01:52.694 CC lib/blob/blobstore.o 00:01:52.694 CC lib/init/subsystem_rpc.o 00:01:52.694 CC lib/init/rpc.o 00:01:52.694 CC lib/blob/request.o 00:01:52.694 CC lib/blob/zeroes.o 00:01:52.694 CC lib/blob/blob_bs_dev.o 00:01:52.694 CC lib/vfu_tgt/tgt_endpoint.o 00:01:52.694 CC lib/vfu_tgt/tgt_rpc.o 00:01:52.953 LIB libspdk_init.a 00:01:52.953 SO libspdk_init.so.5.0 00:01:53.211 LIB libspdk_virtio.a 00:01:53.211 LIB libspdk_vfu_tgt.a 00:01:53.211 SO libspdk_vfu_tgt.so.3.0 00:01:53.211 SO libspdk_virtio.so.7.0 00:01:53.211 SYMLINK libspdk_init.so 00:01:53.211 SYMLINK libspdk_vfu_tgt.so 00:01:53.211 SYMLINK libspdk_virtio.so 00:01:53.469 CC lib/event/app.o 00:01:53.469 CC lib/event/reactor.o 00:01:53.469 CC lib/event/app_rpc.o 00:01:53.469 CC lib/event/log_rpc.o 00:01:53.469 CC 
lib/event/scheduler_static.o 00:01:53.469 LIB libspdk_accel.a 00:01:53.469 SO libspdk_accel.so.15.1 00:01:53.727 SYMLINK libspdk_accel.so 00:01:53.727 LIB libspdk_nvme.a 00:01:53.727 LIB libspdk_event.a 00:01:53.727 SO libspdk_event.so.14.0 00:01:53.727 SO libspdk_nvme.so.13.1 00:01:53.984 SYMLINK libspdk_event.so 00:01:53.984 CC lib/bdev/bdev.o 00:01:53.984 CC lib/bdev/bdev_rpc.o 00:01:53.984 CC lib/bdev/bdev_zone.o 00:01:53.984 CC lib/bdev/scsi_nvme.o 00:01:53.984 CC lib/bdev/part.o 00:01:53.984 SYMLINK libspdk_nvme.so 00:01:54.918 LIB libspdk_blob.a 00:01:54.918 SO libspdk_blob.so.11.0 00:01:54.918 SYMLINK libspdk_blob.so 00:01:55.175 CC lib/blobfs/blobfs.o 00:01:55.175 CC lib/blobfs/tree.o 00:01:55.433 CC lib/lvol/lvol.o 00:01:55.690 LIB libspdk_bdev.a 00:01:55.690 SO libspdk_bdev.so.15.1 00:01:55.948 SYMLINK libspdk_bdev.so 00:01:55.948 LIB libspdk_blobfs.a 00:01:55.948 SO libspdk_blobfs.so.10.0 00:01:55.948 SYMLINK libspdk_blobfs.so 00:01:55.948 LIB libspdk_lvol.a 00:01:55.948 SO libspdk_lvol.so.10.0 00:01:55.948 SYMLINK libspdk_lvol.so 00:01:56.205 CC lib/nvmf/ctrlr.o 00:01:56.205 CC lib/nvmf/ctrlr_discovery.o 00:01:56.205 CC lib/nvmf/ctrlr_bdev.o 00:01:56.205 CC lib/nvmf/subsystem.o 00:01:56.205 CC lib/nvmf/transport.o 00:01:56.205 CC lib/nbd/nbd.o 00:01:56.205 CC lib/nvmf/nvmf.o 00:01:56.205 CC lib/nvmf/nvmf_rpc.o 00:01:56.205 CC lib/nvmf/tcp.o 00:01:56.205 CC lib/nbd/nbd_rpc.o 00:01:56.205 CC lib/nvmf/mdns_server.o 00:01:56.205 CC lib/nvmf/stubs.o 00:01:56.205 CC lib/nvmf/vfio_user.o 00:01:56.205 CC lib/nvmf/rdma.o 00:01:56.205 CC lib/nvmf/auth.o 00:01:56.205 CC lib/ftl/ftl_core.o 00:01:56.205 CC lib/ftl/ftl_init.o 00:01:56.205 CC lib/ftl/ftl_debug.o 00:01:56.205 CC lib/ftl/ftl_layout.o 00:01:56.205 CC lib/ftl/ftl_sb.o 00:01:56.205 CC lib/ftl/ftl_io.o 00:01:56.205 CC lib/ftl/ftl_l2p_flat.o 00:01:56.205 CC lib/ftl/ftl_nv_cache.o 00:01:56.205 CC lib/ftl/ftl_l2p.o 00:01:56.205 CC lib/ublk/ublk.o 00:01:56.205 CC lib/ftl/ftl_band.o 00:01:56.205 CC lib/ftl/ftl_band_ops.o 00:01:56.205 CC lib/ublk/ublk_rpc.o 00:01:56.205 CC lib/ftl/ftl_writer.o 00:01:56.205 CC lib/ftl/ftl_rq.o 00:01:56.205 CC lib/ftl/ftl_reloc.o 00:01:56.205 CC lib/scsi/dev.o 00:01:56.205 CC lib/ftl/ftl_l2p_cache.o 00:01:56.205 CC lib/scsi/lun.o 00:01:56.205 CC lib/ftl/ftl_p2l.o 00:01:56.205 CC lib/ftl/mngt/ftl_mngt.o 00:01:56.205 CC lib/scsi/port.o 00:01:56.205 CC lib/scsi/scsi.o 00:01:56.205 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:56.205 CC lib/scsi/scsi_bdev.o 00:01:56.205 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:56.205 CC lib/scsi/scsi_pr.o 00:01:56.205 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:56.205 CC lib/scsi/task.o 00:01:56.205 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:56.205 CC lib/scsi/scsi_rpc.o 00:01:56.205 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:56.205 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:56.205 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:56.205 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:56.205 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:56.205 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:56.205 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:56.205 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:56.205 CC lib/ftl/utils/ftl_conf.o 00:01:56.205 CC lib/ftl/utils/ftl_mempool.o 00:01:56.205 CC lib/ftl/utils/ftl_md.o 00:01:56.205 CC lib/ftl/utils/ftl_property.o 00:01:56.205 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:56.205 CC lib/ftl/utils/ftl_bitmap.o 00:01:56.205 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:56.205 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:56.205 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:56.205 CC 
lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:56.205 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:56.205 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:56.205 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:56.205 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:56.205 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:56.205 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:56.205 CC lib/ftl/base/ftl_base_dev.o 00:01:56.205 CC lib/ftl/ftl_trace.o 00:01:56.205 CC lib/ftl/base/ftl_base_bdev.o 00:01:56.772 LIB libspdk_nbd.a 00:01:56.772 SO libspdk_nbd.so.7.0 00:01:56.772 LIB libspdk_scsi.a 00:01:56.772 SYMLINK libspdk_nbd.so 00:01:56.772 SO libspdk_scsi.so.9.0 00:01:56.772 LIB libspdk_ublk.a 00:01:57.031 SYMLINK libspdk_scsi.so 00:01:57.031 SO libspdk_ublk.so.3.0 00:01:57.031 SYMLINK libspdk_ublk.so 00:01:57.290 LIB libspdk_ftl.a 00:01:57.290 CC lib/vhost/vhost.o 00:01:57.290 CC lib/iscsi/conn.o 00:01:57.290 CC lib/vhost/vhost_rpc.o 00:01:57.290 CC lib/iscsi/init_grp.o 00:01:57.290 CC lib/vhost/vhost_scsi.o 00:01:57.290 CC lib/iscsi/md5.o 00:01:57.290 CC lib/iscsi/iscsi.o 00:01:57.290 CC lib/vhost/vhost_blk.o 00:01:57.290 CC lib/vhost/rte_vhost_user.o 00:01:57.290 CC lib/iscsi/param.o 00:01:57.290 CC lib/iscsi/portal_grp.o 00:01:57.290 CC lib/iscsi/iscsi_subsystem.o 00:01:57.290 CC lib/iscsi/tgt_node.o 00:01:57.290 CC lib/iscsi/iscsi_rpc.o 00:01:57.290 CC lib/iscsi/task.o 00:01:57.290 SO libspdk_ftl.so.9.0 00:01:57.548 SYMLINK libspdk_ftl.so 00:01:57.807 LIB libspdk_nvmf.a 00:01:57.807 SO libspdk_nvmf.so.19.0 00:01:58.066 LIB libspdk_vhost.a 00:01:58.066 SYMLINK libspdk_nvmf.so 00:01:58.066 SO libspdk_vhost.so.8.0 00:01:58.066 SYMLINK libspdk_vhost.so 00:01:58.066 LIB libspdk_iscsi.a 00:01:58.337 SO libspdk_iscsi.so.8.0 00:01:58.337 SYMLINK libspdk_iscsi.so 00:01:58.991 CC module/env_dpdk/env_dpdk_rpc.o 00:01:58.991 CC module/vfu_device/vfu_virtio.o 00:01:58.992 CC module/vfu_device/vfu_virtio_blk.o 00:01:58.992 CC module/vfu_device/vfu_virtio_scsi.o 00:01:58.992 CC module/vfu_device/vfu_virtio_rpc.o 00:01:58.992 CC module/accel/dsa/accel_dsa.o 00:01:58.992 CC module/accel/dsa/accel_dsa_rpc.o 00:01:58.992 CC module/accel/error/accel_error_rpc.o 00:01:58.992 CC module/accel/error/accel_error.o 00:01:58.992 CC module/blob/bdev/blob_bdev.o 00:01:58.992 LIB libspdk_env_dpdk_rpc.a 00:01:58.992 CC module/scheduler/gscheduler/gscheduler.o 00:01:58.992 CC module/accel/ioat/accel_ioat.o 00:01:58.992 CC module/accel/iaa/accel_iaa_rpc.o 00:01:58.992 CC module/accel/ioat/accel_ioat_rpc.o 00:01:58.992 CC module/accel/iaa/accel_iaa.o 00:01:58.992 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:58.992 CC module/keyring/linux/keyring.o 00:01:58.992 CC module/keyring/linux/keyring_rpc.o 00:01:58.992 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:58.992 CC module/keyring/file/keyring.o 00:01:58.992 CC module/keyring/file/keyring_rpc.o 00:01:58.992 CC module/sock/posix/posix.o 00:01:58.992 SO libspdk_env_dpdk_rpc.so.6.0 00:01:58.992 SYMLINK libspdk_env_dpdk_rpc.so 00:01:58.992 LIB libspdk_scheduler_gscheduler.a 00:01:59.250 LIB libspdk_keyring_linux.a 00:01:59.250 LIB libspdk_keyring_file.a 00:01:59.250 LIB libspdk_scheduler_dpdk_governor.a 00:01:59.250 SO libspdk_scheduler_gscheduler.so.4.0 00:01:59.250 LIB libspdk_scheduler_dynamic.a 00:01:59.250 LIB libspdk_accel_error.a 00:01:59.250 LIB libspdk_accel_ioat.a 00:01:59.250 SO libspdk_keyring_linux.so.1.0 00:01:59.251 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:59.251 SO libspdk_keyring_file.so.1.0 00:01:59.251 SO libspdk_scheduler_dynamic.so.4.0 00:01:59.251 LIB libspdk_accel_iaa.a 00:01:59.251 
LIB libspdk_accel_dsa.a 00:01:59.251 SYMLINK libspdk_scheduler_gscheduler.so 00:01:59.251 SO libspdk_accel_error.so.2.0 00:01:59.251 SO libspdk_accel_ioat.so.6.0 00:01:59.251 LIB libspdk_blob_bdev.a 00:01:59.251 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:59.251 SO libspdk_accel_dsa.so.5.0 00:01:59.251 SYMLINK libspdk_keyring_file.so 00:01:59.251 SO libspdk_accel_iaa.so.3.0 00:01:59.251 SYMLINK libspdk_keyring_linux.so 00:01:59.251 SYMLINK libspdk_scheduler_dynamic.so 00:01:59.251 SO libspdk_blob_bdev.so.11.0 00:01:59.251 SYMLINK libspdk_accel_ioat.so 00:01:59.251 SYMLINK libspdk_accel_error.so 00:01:59.251 SYMLINK libspdk_accel_iaa.so 00:01:59.251 SYMLINK libspdk_accel_dsa.so 00:01:59.251 SYMLINK libspdk_blob_bdev.so 00:01:59.251 LIB libspdk_vfu_device.a 00:01:59.509 SO libspdk_vfu_device.so.3.0 00:01:59.509 SYMLINK libspdk_vfu_device.so 00:01:59.509 LIB libspdk_sock_posix.a 00:01:59.509 SO libspdk_sock_posix.so.6.0 00:01:59.767 SYMLINK libspdk_sock_posix.so 00:01:59.767 CC module/bdev/error/vbdev_error.o 00:01:59.767 CC module/bdev/error/vbdev_error_rpc.o 00:01:59.767 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:59.767 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:59.767 CC module/bdev/gpt/gpt.o 00:01:59.767 CC module/bdev/gpt/vbdev_gpt.o 00:01:59.767 CC module/bdev/passthru/vbdev_passthru.o 00:01:59.767 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:59.768 CC module/bdev/delay/vbdev_delay.o 00:01:59.768 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:59.768 CC module/blobfs/bdev/blobfs_bdev.o 00:01:59.768 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:59.768 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:59.768 CC module/bdev/nvme/bdev_nvme.o 00:01:59.768 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:59.768 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:59.768 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:59.768 CC module/bdev/nvme/nvme_rpc.o 00:01:59.768 CC module/bdev/nvme/bdev_mdns_client.o 00:01:59.768 CC module/bdev/nvme/vbdev_opal.o 00:01:59.768 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:59.768 CC module/bdev/lvol/vbdev_lvol.o 00:01:59.768 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:59.768 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:59.768 CC module/bdev/null/bdev_null_rpc.o 00:01:59.768 CC module/bdev/null/bdev_null.o 00:01:59.768 CC module/bdev/aio/bdev_aio.o 00:01:59.768 CC module/bdev/split/vbdev_split.o 00:01:59.768 CC module/bdev/aio/bdev_aio_rpc.o 00:01:59.768 CC module/bdev/split/vbdev_split_rpc.o 00:01:59.768 CC module/bdev/malloc/bdev_malloc.o 00:01:59.768 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:59.768 CC module/bdev/iscsi/bdev_iscsi.o 00:01:59.768 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:59.768 CC module/bdev/raid/bdev_raid.o 00:01:59.768 CC module/bdev/raid/bdev_raid_sb.o 00:01:59.768 CC module/bdev/raid/bdev_raid_rpc.o 00:01:59.768 CC module/bdev/raid/raid0.o 00:01:59.768 CC module/bdev/raid/raid1.o 00:01:59.768 CC module/bdev/raid/concat.o 00:01:59.768 CC module/bdev/ftl/bdev_ftl.o 00:01:59.768 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:00.026 LIB libspdk_blobfs_bdev.a 00:02:00.026 LIB libspdk_bdev_gpt.a 00:02:00.026 LIB libspdk_bdev_null.a 00:02:00.026 LIB libspdk_bdev_split.a 00:02:00.026 LIB libspdk_bdev_error.a 00:02:00.026 SO libspdk_blobfs_bdev.so.6.0 00:02:00.026 SO libspdk_bdev_null.so.6.0 00:02:00.026 SO libspdk_bdev_gpt.so.6.0 00:02:00.026 LIB libspdk_bdev_zone_block.a 00:02:00.026 SO libspdk_bdev_error.so.6.0 00:02:00.026 LIB libspdk_bdev_passthru.a 00:02:00.026 SO libspdk_bdev_split.so.6.0 00:02:00.026 LIB libspdk_bdev_ftl.a 
00:02:00.026 SYMLINK libspdk_blobfs_bdev.so 00:02:00.026 SO libspdk_bdev_passthru.so.6.0 00:02:00.026 SO libspdk_bdev_zone_block.so.6.0 00:02:00.026 SO libspdk_bdev_ftl.so.6.0 00:02:00.026 SYMLINK libspdk_bdev_null.so 00:02:00.026 SYMLINK libspdk_bdev_split.so 00:02:00.026 SYMLINK libspdk_bdev_gpt.so 00:02:00.026 LIB libspdk_bdev_aio.a 00:02:00.026 SYMLINK libspdk_bdev_error.so 00:02:00.027 LIB libspdk_bdev_iscsi.a 00:02:00.027 LIB libspdk_bdev_delay.a 00:02:00.027 SYMLINK libspdk_bdev_passthru.so 00:02:00.027 LIB libspdk_bdev_malloc.a 00:02:00.027 SO libspdk_bdev_iscsi.so.6.0 00:02:00.027 SO libspdk_bdev_delay.so.6.0 00:02:00.027 SYMLINK libspdk_bdev_ftl.so 00:02:00.027 SO libspdk_bdev_aio.so.6.0 00:02:00.027 SYMLINK libspdk_bdev_zone_block.so 00:02:00.285 SO libspdk_bdev_malloc.so.6.0 00:02:00.285 SYMLINK libspdk_bdev_delay.so 00:02:00.285 LIB libspdk_bdev_virtio.a 00:02:00.285 SYMLINK libspdk_bdev_iscsi.so 00:02:00.285 SYMLINK libspdk_bdev_aio.so 00:02:00.285 SYMLINK libspdk_bdev_malloc.so 00:02:00.285 LIB libspdk_bdev_lvol.a 00:02:00.285 SO libspdk_bdev_virtio.so.6.0 00:02:00.285 SO libspdk_bdev_lvol.so.6.0 00:02:00.285 SYMLINK libspdk_bdev_virtio.so 00:02:00.285 SYMLINK libspdk_bdev_lvol.so 00:02:00.545 LIB libspdk_bdev_raid.a 00:02:00.545 SO libspdk_bdev_raid.so.6.0 00:02:00.545 SYMLINK libspdk_bdev_raid.so 00:02:01.482 LIB libspdk_bdev_nvme.a 00:02:01.482 SO libspdk_bdev_nvme.so.7.0 00:02:01.482 SYMLINK libspdk_bdev_nvme.so 00:02:02.051 CC module/event/subsystems/sock/sock.o 00:02:02.051 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:02.051 CC module/event/subsystems/vmd/vmd.o 00:02:02.051 CC module/event/subsystems/iobuf/iobuf.o 00:02:02.051 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:02.051 CC module/event/subsystems/scheduler/scheduler.o 00:02:02.051 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:02.051 CC module/event/subsystems/keyring/keyring.o 00:02:02.051 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:02.312 LIB libspdk_event_sock.a 00:02:02.312 LIB libspdk_event_vmd.a 00:02:02.312 LIB libspdk_event_keyring.a 00:02:02.312 LIB libspdk_event_vhost_blk.a 00:02:02.312 LIB libspdk_event_scheduler.a 00:02:02.312 LIB libspdk_event_iobuf.a 00:02:02.312 SO libspdk_event_sock.so.5.0 00:02:02.312 SO libspdk_event_vmd.so.6.0 00:02:02.312 SO libspdk_event_vhost_blk.so.3.0 00:02:02.312 SO libspdk_event_keyring.so.1.0 00:02:02.312 LIB libspdk_event_vfu_tgt.a 00:02:02.312 SO libspdk_event_scheduler.so.4.0 00:02:02.312 SO libspdk_event_iobuf.so.3.0 00:02:02.312 SYMLINK libspdk_event_sock.so 00:02:02.312 SO libspdk_event_vfu_tgt.so.3.0 00:02:02.312 SYMLINK libspdk_event_vhost_blk.so 00:02:02.312 SYMLINK libspdk_event_keyring.so 00:02:02.312 SYMLINK libspdk_event_vmd.so 00:02:02.312 SYMLINK libspdk_event_scheduler.so 00:02:02.312 SYMLINK libspdk_event_iobuf.so 00:02:02.312 SYMLINK libspdk_event_vfu_tgt.so 00:02:02.571 CC module/event/subsystems/accel/accel.o 00:02:02.831 LIB libspdk_event_accel.a 00:02:02.831 SO libspdk_event_accel.so.6.0 00:02:02.831 SYMLINK libspdk_event_accel.so 00:02:03.090 CC module/event/subsystems/bdev/bdev.o 00:02:03.349 LIB libspdk_event_bdev.a 00:02:03.349 SO libspdk_event_bdev.so.6.0 00:02:03.349 SYMLINK libspdk_event_bdev.so 00:02:03.607 CC module/event/subsystems/nbd/nbd.o 00:02:03.607 CC module/event/subsystems/scsi/scsi.o 00:02:03.607 CC module/event/subsystems/ublk/ublk.o 00:02:03.607 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:03.607 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:03.865 LIB libspdk_event_nbd.a 00:02:03.865 SO 
libspdk_event_nbd.so.6.0 00:02:03.865 LIB libspdk_event_scsi.a 00:02:03.865 LIB libspdk_event_ublk.a 00:02:03.865 SO libspdk_event_scsi.so.6.0 00:02:03.865 SYMLINK libspdk_event_nbd.so 00:02:03.865 SO libspdk_event_ublk.so.3.0 00:02:03.865 LIB libspdk_event_nvmf.a 00:02:03.865 SYMLINK libspdk_event_scsi.so 00:02:03.865 SO libspdk_event_nvmf.so.6.0 00:02:03.865 SYMLINK libspdk_event_ublk.so 00:02:04.124 SYMLINK libspdk_event_nvmf.so 00:02:04.124 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:04.124 CC module/event/subsystems/iscsi/iscsi.o 00:02:04.383 LIB libspdk_event_vhost_scsi.a 00:02:04.383 LIB libspdk_event_iscsi.a 00:02:04.383 SO libspdk_event_vhost_scsi.so.3.0 00:02:04.383 SO libspdk_event_iscsi.so.6.0 00:02:04.383 SYMLINK libspdk_event_vhost_scsi.so 00:02:04.383 SYMLINK libspdk_event_iscsi.so 00:02:04.644 SO libspdk.so.6.0 00:02:04.644 SYMLINK libspdk.so 00:02:04.903 CC app/spdk_nvme_discover/discovery_aer.o 00:02:04.903 CC app/spdk_lspci/spdk_lspci.o 00:02:04.903 CC app/spdk_nvme_identify/identify.o 00:02:04.903 CC app/spdk_nvme_perf/perf.o 00:02:04.903 CXX app/trace/trace.o 00:02:04.903 CC app/trace_record/trace_record.o 00:02:04.903 CC app/spdk_top/spdk_top.o 00:02:04.903 CC test/rpc_client/rpc_client_test.o 00:02:04.903 TEST_HEADER include/spdk/accel.h 00:02:04.903 TEST_HEADER include/spdk/accel_module.h 00:02:04.903 TEST_HEADER include/spdk/barrier.h 00:02:04.903 TEST_HEADER include/spdk/assert.h 00:02:04.903 TEST_HEADER include/spdk/base64.h 00:02:04.903 TEST_HEADER include/spdk/bdev.h 00:02:04.903 TEST_HEADER include/spdk/bdev_module.h 00:02:04.903 TEST_HEADER include/spdk/bdev_zone.h 00:02:04.903 TEST_HEADER include/spdk/bit_pool.h 00:02:04.903 TEST_HEADER include/spdk/bit_array.h 00:02:04.903 TEST_HEADER include/spdk/blob_bdev.h 00:02:04.903 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:04.903 TEST_HEADER include/spdk/blobfs.h 00:02:04.903 TEST_HEADER include/spdk/conf.h 00:02:04.903 TEST_HEADER include/spdk/blob.h 00:02:04.903 TEST_HEADER include/spdk/config.h 00:02:04.903 TEST_HEADER include/spdk/cpuset.h 00:02:04.903 TEST_HEADER include/spdk/crc32.h 00:02:04.903 TEST_HEADER include/spdk/crc64.h 00:02:04.903 TEST_HEADER include/spdk/crc16.h 00:02:04.903 TEST_HEADER include/spdk/dif.h 00:02:04.903 TEST_HEADER include/spdk/endian.h 00:02:04.903 TEST_HEADER include/spdk/dma.h 00:02:04.903 TEST_HEADER include/spdk/env_dpdk.h 00:02:04.903 TEST_HEADER include/spdk/env.h 00:02:04.903 TEST_HEADER include/spdk/event.h 00:02:04.903 TEST_HEADER include/spdk/fd_group.h 00:02:04.903 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:04.903 CC app/iscsi_tgt/iscsi_tgt.o 00:02:04.903 TEST_HEADER include/spdk/file.h 00:02:04.903 TEST_HEADER include/spdk/fd.h 00:02:04.903 TEST_HEADER include/spdk/ftl.h 00:02:04.903 TEST_HEADER include/spdk/gpt_spec.h 00:02:04.903 TEST_HEADER include/spdk/hexlify.h 00:02:04.903 TEST_HEADER include/spdk/idxd.h 00:02:04.903 TEST_HEADER include/spdk/histogram_data.h 00:02:04.903 TEST_HEADER include/spdk/idxd_spec.h 00:02:04.903 TEST_HEADER include/spdk/init.h 00:02:04.903 TEST_HEADER include/spdk/ioat_spec.h 00:02:04.903 TEST_HEADER include/spdk/iscsi_spec.h 00:02:04.903 TEST_HEADER include/spdk/ioat.h 00:02:04.903 TEST_HEADER include/spdk/json.h 00:02:04.903 CC app/nvmf_tgt/nvmf_main.o 00:02:04.903 TEST_HEADER include/spdk/jsonrpc.h 00:02:04.903 TEST_HEADER include/spdk/keyring_module.h 00:02:04.903 TEST_HEADER include/spdk/keyring.h 00:02:04.903 TEST_HEADER include/spdk/likely.h 00:02:04.903 TEST_HEADER include/spdk/log.h 00:02:04.903 TEST_HEADER 
include/spdk/lvol.h 00:02:04.903 TEST_HEADER include/spdk/memory.h 00:02:04.903 TEST_HEADER include/spdk/nbd.h 00:02:04.903 CC app/spdk_dd/spdk_dd.o 00:02:04.903 TEST_HEADER include/spdk/mmio.h 00:02:04.903 TEST_HEADER include/spdk/notify.h 00:02:04.903 TEST_HEADER include/spdk/nvme.h 00:02:04.903 TEST_HEADER include/spdk/nvme_intel.h 00:02:04.903 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:04.903 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:04.903 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:04.903 TEST_HEADER include/spdk/nvme_zns.h 00:02:04.903 TEST_HEADER include/spdk/nvme_spec.h 00:02:04.903 CC app/spdk_tgt/spdk_tgt.o 00:02:04.903 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:05.171 TEST_HEADER include/spdk/nvmf.h 00:02:05.171 TEST_HEADER include/spdk/nvmf_spec.h 00:02:05.171 TEST_HEADER include/spdk/opal.h 00:02:05.171 TEST_HEADER include/spdk/nvmf_transport.h 00:02:05.171 TEST_HEADER include/spdk/pci_ids.h 00:02:05.171 TEST_HEADER include/spdk/opal_spec.h 00:02:05.171 TEST_HEADER include/spdk/pipe.h 00:02:05.171 TEST_HEADER include/spdk/rpc.h 00:02:05.171 TEST_HEADER include/spdk/reduce.h 00:02:05.171 TEST_HEADER include/spdk/scsi.h 00:02:05.171 TEST_HEADER include/spdk/queue.h 00:02:05.171 TEST_HEADER include/spdk/scheduler.h 00:02:05.171 TEST_HEADER include/spdk/sock.h 00:02:05.171 TEST_HEADER include/spdk/scsi_spec.h 00:02:05.171 TEST_HEADER include/spdk/stdinc.h 00:02:05.171 TEST_HEADER include/spdk/string.h 00:02:05.171 TEST_HEADER include/spdk/trace.h 00:02:05.171 TEST_HEADER include/spdk/thread.h 00:02:05.171 TEST_HEADER include/spdk/trace_parser.h 00:02:05.171 TEST_HEADER include/spdk/ublk.h 00:02:05.171 TEST_HEADER include/spdk/util.h 00:02:05.171 TEST_HEADER include/spdk/tree.h 00:02:05.171 TEST_HEADER include/spdk/uuid.h 00:02:05.171 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:05.171 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:05.171 TEST_HEADER include/spdk/vhost.h 00:02:05.171 TEST_HEADER include/spdk/vmd.h 00:02:05.171 TEST_HEADER include/spdk/version.h 00:02:05.171 TEST_HEADER include/spdk/xor.h 00:02:05.171 TEST_HEADER include/spdk/zipf.h 00:02:05.171 CXX test/cpp_headers/accel.o 00:02:05.171 CXX test/cpp_headers/accel_module.o 00:02:05.171 CXX test/cpp_headers/barrier.o 00:02:05.171 CXX test/cpp_headers/assert.o 00:02:05.171 CXX test/cpp_headers/base64.o 00:02:05.171 CXX test/cpp_headers/bdev.o 00:02:05.171 CXX test/cpp_headers/bdev_module.o 00:02:05.171 CXX test/cpp_headers/bit_array.o 00:02:05.171 CXX test/cpp_headers/bdev_zone.o 00:02:05.171 CXX test/cpp_headers/bit_pool.o 00:02:05.171 CXX test/cpp_headers/blob_bdev.o 00:02:05.171 CXX test/cpp_headers/blobfs.o 00:02:05.171 CXX test/cpp_headers/blobfs_bdev.o 00:02:05.171 CXX test/cpp_headers/conf.o 00:02:05.171 CXX test/cpp_headers/config.o 00:02:05.171 CXX test/cpp_headers/blob.o 00:02:05.171 CXX test/cpp_headers/cpuset.o 00:02:05.171 CXX test/cpp_headers/crc16.o 00:02:05.171 CXX test/cpp_headers/crc32.o 00:02:05.171 CXX test/cpp_headers/crc64.o 00:02:05.171 CXX test/cpp_headers/dif.o 00:02:05.171 CXX test/cpp_headers/dma.o 00:02:05.171 CXX test/cpp_headers/env_dpdk.o 00:02:05.171 CXX test/cpp_headers/endian.o 00:02:05.171 CXX test/cpp_headers/fd_group.o 00:02:05.171 CXX test/cpp_headers/fd.o 00:02:05.171 CXX test/cpp_headers/env.o 00:02:05.171 CXX test/cpp_headers/file.o 00:02:05.171 CXX test/cpp_headers/event.o 00:02:05.171 CXX test/cpp_headers/hexlify.o 00:02:05.171 CXX test/cpp_headers/histogram_data.o 00:02:05.171 CXX test/cpp_headers/ftl.o 00:02:05.171 CXX test/cpp_headers/gpt_spec.o 00:02:05.171 
CXX test/cpp_headers/init.o 00:02:05.171 CXX test/cpp_headers/idxd.o 00:02:05.171 CXX test/cpp_headers/idxd_spec.o 00:02:05.171 CXX test/cpp_headers/ioat_spec.o 00:02:05.171 CXX test/cpp_headers/iscsi_spec.o 00:02:05.171 CXX test/cpp_headers/ioat.o 00:02:05.171 CXX test/cpp_headers/keyring.o 00:02:05.171 CXX test/cpp_headers/json.o 00:02:05.171 CXX test/cpp_headers/jsonrpc.o 00:02:05.171 CXX test/cpp_headers/keyring_module.o 00:02:05.171 CXX test/cpp_headers/likely.o 00:02:05.171 CXX test/cpp_headers/log.o 00:02:05.171 CXX test/cpp_headers/lvol.o 00:02:05.171 CXX test/cpp_headers/mmio.o 00:02:05.171 CXX test/cpp_headers/memory.o 00:02:05.171 CXX test/cpp_headers/nbd.o 00:02:05.171 CXX test/cpp_headers/notify.o 00:02:05.171 CXX test/cpp_headers/nvme.o 00:02:05.171 CXX test/cpp_headers/nvme_intel.o 00:02:05.171 CXX test/cpp_headers/nvme_ocssd.o 00:02:05.171 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:05.171 CXX test/cpp_headers/nvme_zns.o 00:02:05.171 CXX test/cpp_headers/nvme_spec.o 00:02:05.171 CXX test/cpp_headers/nvmf_cmd.o 00:02:05.171 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:05.171 CXX test/cpp_headers/nvmf_spec.o 00:02:05.171 CXX test/cpp_headers/nvmf.o 00:02:05.171 CXX test/cpp_headers/nvmf_transport.o 00:02:05.171 CXX test/cpp_headers/opal.o 00:02:05.171 CXX test/cpp_headers/pci_ids.o 00:02:05.171 CXX test/cpp_headers/pipe.o 00:02:05.171 CXX test/cpp_headers/queue.o 00:02:05.171 CXX test/cpp_headers/opal_spec.o 00:02:05.171 CC test/thread/poller_perf/poller_perf.o 00:02:05.171 CC examples/ioat/perf/perf.o 00:02:05.171 CC app/fio/nvme/fio_plugin.o 00:02:05.171 CC examples/ioat/verify/verify.o 00:02:05.171 CC examples/util/zipf/zipf.o 00:02:05.171 CC test/app/jsoncat/jsoncat.o 00:02:05.171 CC test/env/memory/memory_ut.o 00:02:05.171 CXX test/cpp_headers/reduce.o 00:02:05.171 CC test/app/histogram_perf/histogram_perf.o 00:02:05.171 CC test/env/pci/pci_ut.o 00:02:05.171 CC test/env/vtophys/vtophys.o 00:02:05.171 CC test/app/stub/stub.o 00:02:05.171 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:05.171 LINK spdk_lspci 00:02:05.171 CC test/dma/test_dma/test_dma.o 00:02:05.171 CC app/fio/bdev/fio_plugin.o 00:02:05.171 CC test/app/bdev_svc/bdev_svc.o 00:02:05.442 LINK spdk_nvme_discover 00:02:05.702 LINK rpc_client_test 00:02:05.702 CC test/env/mem_callbacks/mem_callbacks.o 00:02:05.702 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:05.702 LINK interrupt_tgt 00:02:05.702 LINK nvmf_tgt 00:02:05.702 LINK poller_perf 00:02:05.702 LINK zipf 00:02:05.702 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:05.702 LINK vtophys 00:02:05.702 LINK spdk_tgt 00:02:05.702 LINK iscsi_tgt 00:02:05.702 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:05.702 CXX test/cpp_headers/rpc.o 00:02:05.702 LINK spdk_trace_record 00:02:05.702 CXX test/cpp_headers/scheduler.o 00:02:05.702 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:05.702 CXX test/cpp_headers/scsi.o 00:02:05.702 LINK stub 00:02:05.702 CXX test/cpp_headers/scsi_spec.o 00:02:05.702 CXX test/cpp_headers/sock.o 00:02:05.702 CXX test/cpp_headers/stdinc.o 00:02:05.702 CXX test/cpp_headers/string.o 00:02:05.702 CXX test/cpp_headers/thread.o 00:02:05.702 CXX test/cpp_headers/trace.o 00:02:05.702 CXX test/cpp_headers/tree.o 00:02:05.702 LINK ioat_perf 00:02:05.702 CXX test/cpp_headers/ublk.o 00:02:05.702 CXX test/cpp_headers/trace_parser.o 00:02:05.702 LINK jsoncat 00:02:05.702 CXX test/cpp_headers/util.o 00:02:05.702 CXX test/cpp_headers/uuid.o 00:02:05.702 CXX test/cpp_headers/version.o 00:02:05.702 CXX test/cpp_headers/vfio_user_pci.o 
00:02:05.702 CXX test/cpp_headers/vfio_user_spec.o 00:02:05.702 CXX test/cpp_headers/vhost.o 00:02:05.702 CXX test/cpp_headers/vmd.o 00:02:05.702 CXX test/cpp_headers/xor.o 00:02:05.702 CXX test/cpp_headers/zipf.o 00:02:05.702 LINK histogram_perf 00:02:05.702 LINK bdev_svc 00:02:05.702 LINK env_dpdk_post_init 00:02:05.702 LINK spdk_dd 00:02:05.702 LINK verify 00:02:05.960 LINK pci_ut 00:02:05.960 LINK spdk_trace 00:02:05.960 LINK test_dma 00:02:06.219 LINK spdk_bdev 00:02:06.219 LINK nvme_fuzz 00:02:06.219 CC examples/sock/hello_world/hello_sock.o 00:02:06.219 CC examples/vmd/lsvmd/lsvmd.o 00:02:06.219 CC test/event/reactor/reactor.o 00:02:06.219 CC examples/idxd/perf/perf.o 00:02:06.219 CC test/event/event_perf/event_perf.o 00:02:06.219 CC examples/vmd/led/led.o 00:02:06.219 CC test/event/app_repeat/app_repeat.o 00:02:06.219 CC test/event/reactor_perf/reactor_perf.o 00:02:06.219 LINK spdk_nvme 00:02:06.219 LINK vhost_fuzz 00:02:06.219 CC examples/thread/thread/thread_ex.o 00:02:06.219 CC test/event/scheduler/scheduler.o 00:02:06.219 LINK spdk_nvme_perf 00:02:06.219 LINK spdk_nvme_identify 00:02:06.219 LINK mem_callbacks 00:02:06.219 LINK spdk_top 00:02:06.219 LINK lsvmd 00:02:06.219 LINK reactor 00:02:06.219 LINK event_perf 00:02:06.219 LINK led 00:02:06.219 LINK reactor_perf 00:02:06.219 CC app/vhost/vhost.o 00:02:06.478 LINK app_repeat 00:02:06.478 LINK hello_sock 00:02:06.478 LINK thread 00:02:06.478 LINK scheduler 00:02:06.478 CC test/nvme/aer/aer.o 00:02:06.478 CC test/nvme/sgl/sgl.o 00:02:06.478 CC test/nvme/reserve/reserve.o 00:02:06.478 LINK idxd_perf 00:02:06.478 CC test/nvme/reset/reset.o 00:02:06.478 CC test/nvme/compliance/nvme_compliance.o 00:02:06.478 CC test/nvme/overhead/overhead.o 00:02:06.478 CC test/nvme/simple_copy/simple_copy.o 00:02:06.478 CC test/nvme/boot_partition/boot_partition.o 00:02:06.478 CC test/nvme/err_injection/err_injection.o 00:02:06.478 CC test/nvme/e2edp/nvme_dp.o 00:02:06.478 CC test/nvme/fdp/fdp.o 00:02:06.478 CC test/nvme/connect_stress/connect_stress.o 00:02:06.478 CC test/nvme/fused_ordering/fused_ordering.o 00:02:06.478 CC test/nvme/startup/startup.o 00:02:06.478 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:06.478 CC test/nvme/cuse/cuse.o 00:02:06.478 CC test/blobfs/mkfs/mkfs.o 00:02:06.478 CC test/accel/dif/dif.o 00:02:06.478 LINK memory_ut 00:02:06.478 LINK vhost 00:02:06.478 CC test/lvol/esnap/esnap.o 00:02:06.747 LINK reserve 00:02:06.747 LINK boot_partition 00:02:06.747 LINK connect_stress 00:02:06.747 LINK startup 00:02:06.747 LINK doorbell_aers 00:02:06.747 LINK err_injection 00:02:06.747 LINK fused_ordering 00:02:06.747 LINK reset 00:02:06.747 LINK simple_copy 00:02:06.747 LINK sgl 00:02:06.747 LINK aer 00:02:06.747 LINK mkfs 00:02:06.747 LINK overhead 00:02:06.747 LINK nvme_dp 00:02:06.747 LINK nvme_compliance 00:02:06.747 LINK fdp 00:02:06.747 CC examples/nvme/arbitration/arbitration.o 00:02:06.747 CC examples/nvme/reconnect/reconnect.o 00:02:06.747 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:06.747 CC examples/nvme/hello_world/hello_world.o 00:02:06.747 CC examples/nvme/hotplug/hotplug.o 00:02:06.747 CC examples/nvme/abort/abort.o 00:02:06.747 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:06.747 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:07.006 CC examples/accel/perf/accel_perf.o 00:02:07.006 LINK dif 00:02:07.006 CC examples/blob/cli/blobcli.o 00:02:07.006 CC examples/blob/hello_world/hello_blob.o 00:02:07.006 LINK pmr_persistence 00:02:07.006 LINK cmb_copy 00:02:07.006 LINK hello_world 00:02:07.006 LINK hotplug 
00:02:07.006 LINK arbitration 00:02:07.006 LINK reconnect 00:02:07.006 LINK abort 00:02:07.006 LINK iscsi_fuzz 00:02:07.264 LINK hello_blob 00:02:07.264 LINK nvme_manage 00:02:07.264 LINK accel_perf 00:02:07.264 LINK blobcli 00:02:07.264 CC test/bdev/bdevio/bdevio.o 00:02:07.523 LINK cuse 00:02:07.781 LINK bdevio 00:02:07.781 CC examples/bdev/bdevperf/bdevperf.o 00:02:07.781 CC examples/bdev/hello_world/hello_bdev.o 00:02:08.041 LINK hello_bdev 00:02:08.300 LINK bdevperf 00:02:08.870 CC examples/nvmf/nvmf/nvmf.o 00:02:09.129 LINK nvmf 00:02:10.063 LINK esnap 00:02:10.322 00:02:10.322 real 0m43.463s 00:02:10.322 user 6m30.138s 00:02:10.322 sys 3m19.746s 00:02:10.322 20:27:44 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:10.322 20:27:44 make -- common/autotest_common.sh@10 -- $ set +x 00:02:10.322 ************************************ 00:02:10.322 END TEST make 00:02:10.322 ************************************ 00:02:10.322 20:27:44 -- common/autotest_common.sh@1142 -- $ return 0 00:02:10.322 20:27:44 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:10.322 20:27:44 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:10.322 20:27:44 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:10.322 20:27:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.322 20:27:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:10.322 20:27:44 -- pm/common@44 -- $ pid=2398239 00:02:10.322 20:27:44 -- pm/common@50 -- $ kill -TERM 2398239 00:02:10.322 20:27:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.322 20:27:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:10.322 20:27:44 -- pm/common@44 -- $ pid=2398240 00:02:10.322 20:27:44 -- pm/common@50 -- $ kill -TERM 2398240 00:02:10.322 20:27:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.322 20:27:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:10.322 20:27:44 -- pm/common@44 -- $ pid=2398242 00:02:10.322 20:27:44 -- pm/common@50 -- $ kill -TERM 2398242 00:02:10.322 20:27:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.322 20:27:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:10.322 20:27:44 -- pm/common@44 -- $ pid=2398269 00:02:10.322 20:27:44 -- pm/common@50 -- $ sudo -E kill -TERM 2398269 00:02:10.322 20:27:44 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:10.322 20:27:44 -- nvmf/common.sh@7 -- # uname -s 00:02:10.322 20:27:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:10.322 20:27:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:10.322 20:27:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:10.322 20:27:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:10.322 20:27:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:10.322 20:27:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:10.322 20:27:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:10.322 20:27:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:10.322 20:27:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:10.322 20:27:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:10.322 20:27:44 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:02:10.323 20:27:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:02:10.323 20:27:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:10.323 20:27:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:10.323 20:27:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:10.323 20:27:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:10.323 20:27:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:10.323 20:27:44 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:10.323 20:27:44 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:10.323 20:27:44 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:10.323 20:27:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.323 20:27:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.323 20:27:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.323 20:27:44 -- paths/export.sh@5 -- # export PATH 00:02:10.323 20:27:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.323 20:27:44 -- nvmf/common.sh@47 -- # : 0 00:02:10.323 20:27:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:10.323 20:27:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:10.323 20:27:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:10.323 20:27:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:10.323 20:27:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:10.323 20:27:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:10.323 20:27:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:10.323 20:27:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:10.323 20:27:44 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:10.323 20:27:44 -- spdk/autotest.sh@32 -- # uname -s 00:02:10.323 20:27:44 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:10.323 20:27:44 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:10.323 20:27:44 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:10.323 20:27:44 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:10.323 20:27:44 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:10.323 20:27:44 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:10.323 20:27:44 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:10.323 20:27:44 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:10.323 20:27:44 -- spdk/autotest.sh@48 -- # udevadm_pid=2457539 00:02:10.323 20:27:44 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:10.323 20:27:44 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:10.323 20:27:44 -- pm/common@17 -- # local monitor 00:02:10.323 20:27:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.323 20:27:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.323 20:27:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.323 20:27:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.323 20:27:44 -- pm/common@21 -- # date +%s 00:02:10.323 20:27:44 -- pm/common@21 -- # date +%s 00:02:10.323 20:27:44 -- pm/common@25 -- # sleep 1 00:02:10.323 20:27:44 -- pm/common@21 -- # date +%s 00:02:10.323 20:27:44 -- pm/common@21 -- # date +%s 00:02:10.323 20:27:44 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721068064 00:02:10.323 20:27:44 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721068064 00:02:10.323 20:27:44 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721068064 00:02:10.323 20:27:44 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721068064 00:02:10.323 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721068064_collect-vmstat.pm.log 00:02:10.582 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721068064_collect-cpu-load.pm.log 00:02:10.582 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721068064_collect-cpu-temp.pm.log 00:02:10.582 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721068064_collect-bmc-pm.bmc.pm.log 00:02:11.518 20:27:45 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:11.518 20:27:45 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:11.518 20:27:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:11.518 20:27:45 -- common/autotest_common.sh@10 -- # set +x 00:02:11.518 20:27:45 -- spdk/autotest.sh@59 -- # create_test_list 00:02:11.518 20:27:45 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:11.518 20:27:45 -- common/autotest_common.sh@10 -- # set +x 00:02:11.518 20:27:45 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:11.518 20:27:45 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:11.518 20:27:45 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
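
An aside on the pm/common machinery traced above: start_monitor_resources launches each collector (collect-cpu-load, collect-vmstat, collect-cpu-temp, collect-bmc-pm) in the background against the shared ../output/power directory with one timestamp suffix, and the TERM path seen at the top of this section kills each monitor by the PID it recorded (the kill -TERM 2398239/2398240/... entries). A minimal bash sketch of that pidfile convention — simplified and illustrative, not the real pm/common implementation:

    #!/usr/bin/env bash
    # Sketch of the start/stop-by-pidfile pattern behind the power monitors.
    # Paths and the collector subset are illustrative.
    power_dir=./output/power
    monitors=(collect-cpu-load collect-vmstat)   # illustrative subset of the four
    stamp=$(date +%s)                            # shared suffix, e.g. 1721068064
    mkdir -p "$power_dir"

    for m in "${monitors[@]}"; do
        # Launch the collector in the background; record exactly one PID file per monitor.
        "scripts/perf/pm/$m" -d "$power_dir" -l -p "monitor.autotest.sh.$stamp" &
        echo $! > "$power_dir/$m.pid"
    done

    # ... later, the stop path: signal each monitor by its recorded PID.
    for m in "${monitors[@]}"; do
        [[ -e $power_dir/$m.pid ]] && kill -TERM "$(<"$power_dir/$m.pid")"
    done
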
00:02:11.518 20:27:45 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:11.518 20:27:45 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:11.518 20:27:45 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:11.518 20:27:45 -- common/autotest_common.sh@1455 -- # uname 00:02:11.519 20:27:45 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:11.519 20:27:45 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:11.519 20:27:45 -- common/autotest_common.sh@1475 -- # uname 00:02:11.519 20:27:45 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:11.519 20:27:45 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:11.519 20:27:45 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:11.519 20:27:45 -- spdk/autotest.sh@72 -- # hash lcov 00:02:11.519 20:27:45 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:11.519 20:27:45 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:11.519 --rc lcov_branch_coverage=1 00:02:11.519 --rc lcov_function_coverage=1 00:02:11.519 --rc genhtml_branch_coverage=1 00:02:11.519 --rc genhtml_function_coverage=1 00:02:11.519 --rc genhtml_legend=1 00:02:11.519 --rc geninfo_all_blocks=1 00:02:11.519 ' 00:02:11.519 20:27:45 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:11.519 --rc lcov_branch_coverage=1 00:02:11.519 --rc lcov_function_coverage=1 00:02:11.519 --rc genhtml_branch_coverage=1 00:02:11.519 --rc genhtml_function_coverage=1 00:02:11.519 --rc genhtml_legend=1 00:02:11.519 --rc geninfo_all_blocks=1 00:02:11.519 ' 00:02:11.519 20:27:45 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:11.519 --rc lcov_branch_coverage=1 00:02:11.519 --rc lcov_function_coverage=1 00:02:11.519 --rc genhtml_branch_coverage=1 00:02:11.519 --rc genhtml_function_coverage=1 00:02:11.519 --rc genhtml_legend=1 00:02:11.519 --rc geninfo_all_blocks=1 00:02:11.519 --no-external' 00:02:11.519 20:27:45 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:11.519 --rc lcov_branch_coverage=1 00:02:11.519 --rc lcov_function_coverage=1 00:02:11.519 --rc genhtml_branch_coverage=1 00:02:11.519 --rc genhtml_function_coverage=1 00:02:11.519 --rc genhtml_legend=1 00:02:11.519 --rc geninfo_all_blocks=1 00:02:11.519 --no-external' 00:02:11.519 20:27:45 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:11.519 lcov: LCOV version 1.14 00:02:11.519 20:27:45 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:23.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:23.726 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:31.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:31.870 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:31.870 
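
The capture just above (lcov -c -i -t Baseline) is the first half of the usual gcov/lcov recipe: an initial capture records a zero count for every instrumented object before any test runs, so files the tests never touch still appear in the final report, and the surrounding geninfo warnings are the harmless by-product of objects that contain no functions. A hedged sketch of the complete flow under lcov 1.14, with illustrative directory and file names:

    # 1. Baseline: zero-count data for everything built with --coverage.
    lcov --rc lcov_branch_coverage=1 --capture --initial -t Baseline \
         -d ./spdk -o cov_base.info
    # 2. ... run the test suites here ...
    # 3. Capture the counters the tests actually produced.
    lcov --rc lcov_branch_coverage=1 --capture -t Tests \
         -d ./spdk -o cov_test.info
    # 4. Merge baseline and test captures, then render an HTML report.
    lcov -a cov_base.info -a cov_test.info -o cov_total.info
    genhtml cov_total.info -o coverage_html
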
00:02:31.870-00:02:35.683 geninfo: WARNING: GCOV did not produce any data for the remaining /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/*.gcno objects (accel_module.gcno through zipf.gcno); every one reported "no functions found", as expected for these header-only compilation units.
00:02:35.683 20:28:09 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:35.683 20:28:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:35.683 20:28:09 -- common/autotest_common.sh@10 -- # set +x 00:02:35.683 20:28:09 -- spdk/autotest.sh@91 -- # rm -f 00:02:35.683 20:28:09 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 
00:02:38.219 0000:5e:00.0 (8086 0a54): Already using the nvme driver 
00:02:38.219-00:02:38.478 0000:00:04.0-0000:00:04.7 and 0000:80:04.0-0000:80:04.7 (8086 2021): Already using the ioatdma driver (16 controllers) 
00:02:38.737 20:28:12 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:38.737 20:28:12 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:38.737 20:28:12 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:38.737 20:28:12 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:38.737 20:28:12 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:38.737 20:28:12 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:38.737 20:28:12 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:38.737 20:28:12 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:38.737 20:28:12 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:38.737 20:28:12 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:38.737 20:28:12 -- 
spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:38.737 20:28:12 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:38.737 20:28:12 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:38.737 20:28:12 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:38.737 20:28:12 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:38.737 No valid GPT data, bailing 00:02:38.737 20:28:13 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:38.737 20:28:13 -- scripts/common.sh@391 -- # pt= 00:02:38.737 20:28:13 -- scripts/common.sh@392 -- # return 1 00:02:38.737 20:28:13 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:38.737 1+0 records in 00:02:38.737 1+0 records out 00:02:38.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0016636 s, 630 MB/s 00:02:38.737 20:28:13 -- spdk/autotest.sh@118 -- # sync 00:02:38.737 20:28:13 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:38.737 20:28:13 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:38.737 20:28:13 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:44.028 20:28:18 -- spdk/autotest.sh@124 -- # uname -s 00:02:44.028 20:28:18 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:44.028 20:28:18 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:44.028 20:28:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:44.028 20:28:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:44.028 20:28:18 -- common/autotest_common.sh@10 -- # set +x 00:02:44.028 ************************************ 00:02:44.028 START TEST setup.sh 00:02:44.028 ************************************ 00:02:44.028 20:28:18 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:44.028 * Looking for test storage... 00:02:44.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:44.028 20:28:18 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:44.028 20:28:18 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:44.028 20:28:18 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:44.028 20:28:18 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:44.028 20:28:18 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:44.028 20:28:18 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:44.028 ************************************ 00:02:44.028 START TEST acl 00:02:44.028 ************************************ 00:02:44.029 20:28:18 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:44.029 * Looking for test storage... 
00:02:44.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:44.029 20:28:18 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:44.029 20:28:18 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 20:28:18 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 20:28:18 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 20:28:18 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 20:28:18 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 20:28:18 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 20:28:18 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 20:28:18 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 20:28:18 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 20:28:18 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 20:28:18 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 20:28:18 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 20:28:18 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 20:28:18 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 20:28:18 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 
00:02:47.316 20:28:21 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 20:28:21 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 20:28:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 20:28:21 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 20:28:21 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 20:28:21 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 
00:02:49.849 Hugepages 00:02:49.849 node hugesize free / total 00:02:49.849 00 00:02:49.849 Type BDF Vendor Device NUMA Driver Device Block devices 
00:02:49.849 20:28:23-20:28:24 setup.sh.acl -- setup/acl.sh@18-@20 -- # the status-parsing loop reads each row and discards everything that is not an NVMe controller: the hugepage rows fail [[ 1048576kB == *:*:*.* ]] / [[ 2048kB == *:*:*.* ]] and hit continue, and each of the sixteen ioatdma controllers (0000:00:04.0-0000:00:04.7, 0000:80:04.0-0000:80:04.7) fails [[ ioatdma == nvme ]] and hits continue; the only row that survives the filter is the NVMe controller: 
00:02:49.849 20:28:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 20:28:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 20:28:24 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 20:28:24 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 20:28:24 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 20:28:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 20:28:24 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 20:28:24 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 20:28:24 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 20:28:24 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 20:28:24 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 
00:02:49.849 ************************************ 00:02:49.849 START TEST denied 00:02:49.849 ************************************ 00:02:49.849 20:28:24 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 20:28:24 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 20:28:24 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 20:28:24 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 20:28:24 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 20:28:24 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 
00:02:53.128 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:02:53.128 20:28:27 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 20:28:27 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 20:28:27 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 20:28:27 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 20:28:27 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:02:53.128 20:28:27 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:53.129 20:28:27 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:53.129 20:28:27 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:53.129 20:28:27 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:53.129 20:28:27 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:57.319 00:02:57.319 real 0m6.778s 00:02:57.319 user 0m2.232s 00:02:57.319 sys 0m3.843s 00:02:57.319 20:28:30 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:57.319 20:28:30 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:57.319 ************************************ 00:02:57.319 END TEST denied 00:02:57.319 ************************************ 00:02:57.319 20:28:31 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:02:57.319 20:28:31 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:57.319 20:28:31 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:57.319 20:28:31 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:57.319 20:28:31 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:57.319 ************************************ 00:02:57.319 START TEST allowed 00:02:57.319 ************************************ 00:02:57.319 20:28:31 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:02:57.319 20:28:31 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:02:57.319 20:28:31 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:57.319 20:28:31 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:57.319 20:28:31 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:57.319 20:28:31 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:03:00.635 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:00.635 20:28:34 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:00.635 20:28:34 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:00.635 20:28:34 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:00.635 20:28:34 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:00.635 20:28:34 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:03.166 00:03:03.166 real 0m6.605s 00:03:03.166 user 0m2.041s 00:03:03.166 sys 0m3.701s 00:03:03.166 20:28:37 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:03.166 20:28:37 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:03.166 ************************************ 00:03:03.166 END TEST allowed 00:03:03.166 ************************************ 00:03:03.426 20:28:37 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:03.426 00:03:03.426 real 0m19.310s 00:03:03.426 user 0m6.502s 00:03:03.426 sys 0m11.418s 00:03:03.426 20:28:37 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:03.426 20:28:37 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:03.426 ************************************ 00:03:03.426 END TEST acl 00:03:03.426 ************************************ 00:03:03.426 20:28:37 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:03.426 20:28:37 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:03.426 20:28:37 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:03.426 20:28:37 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:03.426 20:28:37 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:03.426 ************************************ 00:03:03.426 START TEST hugepages 00:03:03.426 ************************************ 00:03:03.426 20:28:37 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:03.426 * Looking for test storage... 00:03:03.426 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:03.426 20:28:37 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:03.426 20:28:37 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:03.426 20:28:37 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:03.426 20:28:37 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:03.426 20:28:37 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:03.426 20:28:37 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:03.426 20:28:37 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:03.426 20:28:37 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:03.426 20:28:37 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:03.426 20:28:37 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:03.426 20:28:37 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.426 20:28:37 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.426 20:28:37 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.426 20:28:37 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.426 20:28:37 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.426 20:28:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:03.426 20:28:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:03.426 20:28:37 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173401884 kB' 'MemAvailable: 176276368 kB' 'Buffers: 3896 kB' 'Cached: 10132016 kB' 'SwapCached: 0 kB' 'Active: 7144764 kB' 'Inactive: 3507524 kB' 'Active(anon): 6752756 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519764 kB' 'Mapped: 169976 kB' 'Shmem: 6236380 kB' 'KReclaimable: 238808 kB' 'Slab: 827996 kB' 'SReclaimable: 238808 kB' 'SUnreclaim: 589188 kB' 'KernelStack: 20512 kB' 'PageTables: 8880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982028 kB' 'Committed_AS: 8289148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315420 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3017684 kB' 'DirectMap2M: 16584704 kB' 'DirectMap1G: 182452224 kB' 
00:03:03.426 20:28:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 20:28:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 20:28:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 20:28:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:03:03.426-… 20:28:37 setup.sh.hugepages -- setup/common.sh@31-@32 -- # the identical compare/continue/read cycle then repeats for every following /proc/meminfo field (MemFree, MemAvailable, Buffers, Cached, SwapCached, the Active/Inactive counters, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, the Vmalloc counters, Percpu, HardwareCorrupted, …), none of which matches Hugepagesize; the capture ends mid-cycle. 
setup/common.sh@31 -- # read -r var val _ 00:03:03.427 20:28:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:03.427 20:28:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:03.427 20:28:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:03.427 20:28:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:03.427 20:28:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:03.427 20:28:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:03.427 20:28:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:03.427 20:28:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:03.427 20:28:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:03.427 20:28:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:03.427 20:28:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:03.427 20:28:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:03.427 20:28:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:03.427 20:28:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:03.427 20:28:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:03.427 20:28:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:03.428 20:28:37 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:03.428 20:28:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:03.428 
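The scan condensed earlier in this trace is the core of setup/common.sh's get_meminfo: split each /proc/meminfo line on ': ', skip non-matching keys with 'continue', and print the value of the first match. A minimal re-sketch of that loop (the helper name is illustrative, not SPDK's actual function):

  # Sketch of the per-key scan replayed by the xtrace above
  # (assumption: plain /proc/meminfo, no per-node prefix).
  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # each skipped key logs one 'continue' above
          echo "$val"                        # e.g. 2048 for Hugepagesize on this node
          return 0
      done < /proc/meminfo
      return 1
  }

On this machine get_meminfo_sketch Hugepagesize prints 2048 (kB), matching the 'echo 2048' in the trace.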
00:03:03.428 20:28:37 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:03.428 20:28:37 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:03.428 20:28:37 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:03.428 20:28:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:03.428 ************************************
00:03:03.428 START TEST default_setup
00:03:03.428 ************************************
00:03:03.428 20:28:37 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:03:03.428 20:28:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:03.428 20:28:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:03:03.428 20:28:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:03.428 20:28:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:03:03.428 20:28:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:03.428 20:28:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:03:03.428 20:28:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:03.428 20:28:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:03.428 20:28:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:03.428 20:28:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:03.428 20:28:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:03:03.428 20:28:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:03.428 20:28:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:03.428 20:28:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:03.428 20:28:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:03.428 20:28:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:03.428 20:28:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:03.428 20:28:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:03.428 20:28:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:03:03.428 20:28:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:03:03.428 20:28:37 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:03.428 20:28:37 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:05.963 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:05.963 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:05.963 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:05.963 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:05.963 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:05.963 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:05.963 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:05.963 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:05.963 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:05.963 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:05.963 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:05.963 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:05.963 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:05.963 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:05.963 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:05.963 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:06.901 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
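Before the driver rebind output above, get_test_nr_hugepages sized the test's request: 2097152 kB (2 GiB) divided by the 2048 kB default hugepage size gives the traced nr_hugepages=1024, all assigned to NUMA node 0. A sketch of that arithmetic (variable names mirror the trace; the kB units are inferred from 'Hugepagesize: 2048 kB'):

  # Reproduces the numbers traced at setup/hugepages.sh@49-@71 (sketch only).
  default_hugepages=2048                       # kB per page, parsed from /proc/meminfo
  size=2097152                                 # kB requested, first arg of get_test_nr_hugepages
  nr_hugepages=$((size / default_hugepages))   # -> 1024
  declare -a nodes_test
  nodes_test[0]=$nr_hugepages                  # the whole request lands on node 0
  echo "node0: ${nodes_test[0]} pages"         # node0: 1024 pages

1024 x 2048 kB is exactly the 'HugePages_Total: 1024' and 'Hugetlb: 2097152 kB' visible in the meminfo snapshots that follow.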
00:03:06.901 20:28:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:06.901 20:28:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:03:06.901 20:28:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:03:06.901 20:28:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:03:06.901 20:28:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:03:06.901 20:28:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:03:06.901 20:28:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:03:06.901 20:28:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:06.901 20:28:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:06.902 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:06.902 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:06.902 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:06.902 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:06.902 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:06.902 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:06.902 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:06.902 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:06.902 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:06.902 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:06.902 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:06.902 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175551332 kB' 'MemAvailable: 178425820 kB' 'Buffers: 3896 kB' 'Cached: 10132116 kB' 'SwapCached: 0 kB' 'Active: 7164424 kB' 'Inactive: 3507524 kB' 'Active(anon): 6772416 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538960 kB' 'Mapped: 170168 kB' 'Shmem: 6236480 kB' 'KReclaimable: 238816 kB' 'Slab: 825692 kB' 'SReclaimable: 238816 kB' 'SUnreclaim: 586876 kB' 'KernelStack: 20496 kB' 'PageTables: 8784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8306416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315500 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3017684 kB' 'DirectMap2M: 16584704 kB' 'DirectMap1G: 182452224 kB'
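The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test a few entries above is verify_nr_hugepages checking the kernel's transparent-hugepage mode string, in which the active mode is the bracketed word; only when THP is not "[never]" does it go on to sample AnonHugePages. A sketch of that gate, assuming the standard sysfs location:

  # Sketch of the THP check traced at setup/hugepages.sh@96.
  thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" here
  if [[ $thp != *"[never]"* ]]; then
      anon_kb=$(get_meminfo_sketch AnonHugePages)        # helper sketched earlier; 0 on this node
  fi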
00:03:06.902 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:06.902 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[xtrace condensed: the same @31-@32 read/compare/continue cycle runs over every other key of the snapshot above until AnonHugePages matches]
00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
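get_meminfo also has a per-node mode, visible in its setup entries above: node= is empty here, so the /sys/devices/system/node/node/meminfo existence test fails and plain /proc/meminfo is read, but with a node id the per-node file is used and its "Node N " prefixes are stripped by the extglob pattern in mem=("${mem[@]#Node +([0-9]) }"). A node-aware sketch of that path (illustrative, not SPDK's code):

  # Node-aware variant of the scan, mirroring setup/common.sh@18-@29 in the trace.
  shopt -s extglob                      # needed for the +([0-9]) prefix strip
  get_node_meminfo_sketch() {
      local get=$1 node=$2 var val _ mem
      local mem_f=/proc/meminfo
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")  # drop "Node N " from per-node lines
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }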
setup/common.sh@17 -- # local get=HugePages_Surp 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175552136 kB' 'MemAvailable: 178426624 kB' 'Buffers: 3896 kB' 'Cached: 10132120 kB' 'SwapCached: 0 kB' 'Active: 7164176 kB' 'Inactive: 3507524 kB' 'Active(anon): 6772168 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539184 kB' 'Mapped: 170060 kB' 'Shmem: 6236484 kB' 'KReclaimable: 238816 kB' 'Slab: 825684 kB' 'SReclaimable: 238816 kB' 'SUnreclaim: 586868 kB' 'KernelStack: 20496 kB' 'PageTables: 8788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8306436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315484 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3017684 kB' 'DirectMap2M: 16584704 kB' 'DirectMap1G: 182452224 kB' 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.903 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue
00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:06.904 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... the same IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue trace repeats for each remaining non-matching key: NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free ...]
00:03:06.905 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:06.905 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:06.905 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:06.905 20:28:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
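What this trace is exercising: the get_meminfo helper in setup/common.sh reads /proc/meminfo (or a per-NUMA-node meminfo file under sysfs) into an array, strips any "Node <n> " prefix, then walks the entries until the requested key matches and echoes its value -- here HugePages_Surp resolved to 0. A minimal standalone sketch of that lookup, reconstructed from the trace above (illustrative, not the verbatim SPDK source):

shopt -s extglob   # the "Node +([0-9]) " strip below uses extended globs

get_meminfo() {
    local get=$1 node=$2
    local var val line
    local mem_f=/proc/meminfo mem
    # With a node number, read the node-local statistics from sysfs instead.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <n> "; drop it so each
    # entry starts with its key, exactly like /proc/meminfo.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"   # a kB figure, or a bare count for the HugePages_* keys
            return 0
        fi
    done
    return 1
}

# Usage mirroring the trace: surp=$(get_meminfo HugePages_Surp)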
00:03:06.905 20:28:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:06.905 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:06.905 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:06.905 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:06.905 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:06.905 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:06.905 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:06.905 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:06.905 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:06.905 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:06.905 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175552028 kB' 'MemAvailable: 178426516 kB' 'Buffers: 3896 kB' 'Cached: 10132136 kB' 'SwapCached: 0 kB' 'Active: 7164184 kB' 'Inactive: 3507524 kB' 'Active(anon): 6772176 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539184 kB' 'Mapped: 170060 kB' 'Shmem: 6236500 kB' 'KReclaimable: 238816 kB' 'Slab: 825684 kB' 'SReclaimable: 238816 kB' 'SUnreclaim: 586868 kB' 'KernelStack: 20496 kB' 'PageTables: 8788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8306456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315484 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3017684 kB' 'DirectMap2M: 16584704 kB' 'DirectMap1G: 182452224 kB'
00:03:06.905 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:06.905 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:06.905 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:06.905 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... the same IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue trace repeats for each remaining non-matching key, MemFree through HugePages_Free ...]
00:03:06.907 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:06.907 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:06.907 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:06.907 20:28:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:06.907 20:28:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:06.907 nr_hugepages=1024
00:03:06.907 20:28:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:06.907 resv_hugepages=0
00:03:06.907 20:28:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:06.907 surplus_hugepages=0
00:03:06.907 20:28:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:06.907 anon_hugepages=0
00:03:06.907 20:28:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:06.907 20:28:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
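At this point the run has confirmed nr_hugepages=1024 with no reserved or surplus pages, and the two arithmetic guards assert that the kernel's hugepage pool adds up before the per-node distribution is checked. A hedged sketch of that bookkeeping (verify_hugepage_pool is a name invented here; it reuses the get_meminfo sketch above):

# Shape of the assertions traced at setup/hugepages.sh@107-110: the pool
# reported by the kernel must equal the requested pages plus any surplus
# and reserved pages read back from /proc/meminfo.
verify_hugepage_pool() {
    local requested=$1 total surp resv
    surp=$(get_meminfo HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
    total=$(get_meminfo HugePages_Total)  # 1024 in this run
    (( total == requested + surp + resv )) && (( total == requested ))
}

verify_hugepage_pool 1024 && echo ok   # passes here: 1024 == 1024 + 0 + 0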
00:03:06.907 20:28:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:06.907 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:06.907 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:06.907 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:06.907 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:06.907 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:06.907 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:06.907 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:06.907 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:06.907 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:06.907 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:06.907 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:06.907 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175552028 kB' 'MemAvailable: 178426516 kB' 'Buffers: 3896 kB' 'Cached: 10132160 kB' 'SwapCached: 0 kB' 'Active: 7163780 kB' 'Inactive: 3507524 kB' 'Active(anon): 6771772 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538732 kB' 'Mapped: 170060 kB' 'Shmem: 6236524 kB' 'KReclaimable: 238816 kB' 'Slab: 825684 kB' 'SReclaimable: 238816 kB' 'SUnreclaim: 586868 kB' 'KernelStack: 20480 kB' 'PageTables: 8728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8306480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315484 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3017684 kB' 'DirectMap2M: 16584704 kB' 'DirectMap1G: 182452224 kB'
00:03:06.907 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:06.907 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... the same IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue trace repeats for each remaining non-matching key, MemFree through Unaccepted ...]
00:03:07.168 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:07.168 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:07.168 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:07.168 20:28:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:07.168 20:28:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:07.168 20:28:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:07.168 20:28:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:07.168 20:28:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:07.168 20:28:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:07.168 20:28:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:07.168 20:28:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:07.168 20:28:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:07.168 20:28:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:07.168 20:28:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
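get_nodes found two NUMA nodes and recorded the whole 1024-page pool on node 0 (nodes_sys[0]=1024, nodes_sys[1]=0); the trace that follows repeats the meminfo walk against node0's sysfs file. A sketch of that per-node pass (the hugepages-2048kB/nr_hugepages path is an assumption of this sketch, since the trace only shows the resulting assignments; get_meminfo is the sketch above):

shopt -s extglob nullglob
declare -A nodes_sys

# Enumerate NUMA nodes the same way the traced get_nodes loop does.
for node in /sys/devices/system/node/node+([0-9]); do
    n=${node##*node}
    # Assumed per-node source for the count recorded as nodes_sys[<n>].
    nodes_sys[$n]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done

for n in "${!nodes_sys[@]}"; do
    # Node-local lookup, e.g. node0: total=1024 surp=0 in this run.
    echo "node$n: total=${nodes_sys[$n]} surp=$(get_meminfo HugePages_Surp "$n")"
done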
00:03:07.168 20:28:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:07.168 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:07.168 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:07.168 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:07.168 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:07.168 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:07.168 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:07.168 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:07.168 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:07.168 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:07.168 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:07.168 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85695484 kB' 'MemUsed: 11967200 kB' 'SwapCached: 0 kB' 'Active: 5004592 kB' 'Inactive: 3336416 kB' 'Active(anon): 4847052 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3336416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8182888 kB' 'Mapped: 74420 kB' 'AnonPages: 161308 kB' 'Shmem: 4688932 kB' 'KernelStack: 10552 kB' 'PageTables: 4052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 129360 kB' 'Slab: 414300 kB' 'SReclaimable: 129360 kB' 'SUnreclaim: 284940 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:07.168 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:07.168 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:07.168 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... the same IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue trace repeats for the remaining non-matching node0 keys: MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp ...]
00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
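[editor's sketch: the condensed loop above is plain bash meminfo parsing; the snippet below is a hedged, self-contained reimplementation of the same scan — the sed-based "Node <n>" prefix strip is an illustrative assumption, not the verbatim setup/common.sh source, which the trace shows using mapfile plus an extglob substitution]

    #!/usr/bin/env bash
    # Print the value of one /proc/meminfo (or per-node meminfo) key.
    # Hedged reimplementation of the pattern traced above, not the SPDK original.
    get_meminfo() {
        local key=$1 node=${2:-}
        local mem_f=/proc/meminfo var val _
        # per-node counters live under sysfs when a node id is given
        [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        # strip the "Node 0 " prefix so per-node files parse like /proc/meminfo;
        # IFS=': ' splits "HugePages_Surp:   0" into key and value
        while IFS=': ' read -r var val _; do
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }
    get_meminfo HugePages_Surp 0   # on this run node0 reports 0, as echoed below
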
00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:07.169 node0=1024 expecting 1024 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:07.169 00:03:07.169 real 0m3.554s 00:03:07.169 user 0m1.075s 00:03:07.169 sys 0m1.713s 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:07.169 20:28:41 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:07.169 ************************************ 00:03:07.169 END TEST default_setup 00:03:07.169 ************************************ 00:03:07.169 20:28:41 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:07.169 20:28:41 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:07.169 20:28:41 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:07.169 20:28:41 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:07.169 20:28:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:07.169 ************************************ 00:03:07.169 START TEST per_node_1G_alloc 00:03:07.169 ************************************ 00:03:07.169 20:28:41 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:07.169 20:28:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:07.169 20:28:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:07.169 20:28:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:07.169 20:28:41 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:07.169 20:28:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:07.169 20:28:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:07.169 20:28:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:07.169 20:28:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:07.169 20:28:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:07.169 20:28:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:07.169 20:28:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:07.169 20:28:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:07.169 20:28:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:07.169 20:28:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:07.169 20:28:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:07.169 20:28:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:07.169 20:28:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:07.169 20:28:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:07.169 20:28:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:07.169 20:28:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:07.169 20:28:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:07.169 20:28:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:07.169 20:28:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:07.169 20:28:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:07.169 20:28:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:07.169 20:28:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:07.169 20:28:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:09.698 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:09.698 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:09.698 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:09.698 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:09.698 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:09.699 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:09.699 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:09.699 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:09.699 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:09.699 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:09.699 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:09.699 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:09.699 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:09.699 
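[editor's sketch: the get_test_nr_hugepages trace above reduces to simple arithmetic — the requested 1048576 kB (1 GiB) per node divided by the 2048 kB default hugepage size gives 512 pages for each of nodes 0 and 1; a minimal rendering of that math with illustrative variable names, reusing the get_meminfo sketch earlier]

    size_kb=1048576                             # 1 GiB requested per node
    hugepage_kb=$(get_meminfo Hugepagesize)     # 2048 on this machine
    nr_hugepages=$(( size_kb / hugepage_kb ))   # -> 512
    declare -A nodes_test
    for node in 0 1; do nodes_test[$node]=$nr_hugepages; done
    # the trace then hands the result to SPDK's setup script, per node:
    NRHUGE=$nr_hugepages HUGENODE=0,1 ./scripts/setup.sh
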
0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:09.699 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:09.699 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:09.699 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:09.963 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:09.963 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:09.963 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:09.963 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:09.963 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:09.963 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:09.963 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:09.963 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:09.963 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:09.963 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:09.963 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:09.963 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:09.963 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:09.963 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:09.963 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.963 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.963 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.963 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.963 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.963 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.963 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.963 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175568264 kB' 'MemAvailable: 178442752 kB' 'Buffers: 3896 kB' 'Cached: 10132248 kB' 'SwapCached: 0 kB' 'Active: 7169004 kB' 'Inactive: 3507524 kB' 'Active(anon): 6776996 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543352 kB' 'Mapped: 170776 kB' 'Shmem: 6236612 kB' 'KReclaimable: 238816 kB' 'Slab: 825584 kB' 'SReclaimable: 238816 kB' 'SUnreclaim: 586768 kB' 'KernelStack: 20672 kB' 'PageTables: 9048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8311064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315596 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3017684 kB' 'DirectMap2M: 16584704 kB' 'DirectMap1G: 182452224 kB'
[log condensed: the same IFS=': ' read/compare walk runs over the full system meminfo key list (MemTotal through HardwareCorrupted), continuing on every non-match, until AnonHugePages matches below]
00:03:09.964 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:09.964 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:09.964 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:09.964 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:09.964 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:09.964 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:09.964 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:09.964 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:09.964 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:09.964 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:09.964 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:09.964 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:09.964 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:09.964 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:09.964 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:09.964 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:09.964 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175565936 kB' 'MemAvailable: 178440424 kB' 'Buffers: 3896 kB' 'Cached: 10132264 kB' 'SwapCached: 0 kB' 'Active: 7170056 kB' 'Inactive: 3507524 kB' 'Active(anon): 6778048 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544648 kB' 'Mapped: 170148 kB' 'Shmem: 6236628 kB' 'KReclaimable: 238816 kB' 'Slab: 825600 kB' 'SReclaimable: 238816 kB' 'SUnreclaim: 586784 kB' 'KernelStack: 20960 kB' 'PageTables: 9612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8304572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315616 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3017684 kB' 'DirectMap2M: 16584704 kB' 'DirectMap1G: 182452224 kB'
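[editor's sketch: verify_nr_hugepages is collecting one counter at a time — anon came back 0 above, and the walk below fetches HugePages_Surp; a plausible condensation of the bookkeeping, reusing the get_meminfo sketch earlier, not the exact hugepages.sh logic]

    anon=$(get_meminfo AnonHugePages)    # 0: no transparent hugepages in play
    surp=$(get_meminfo HugePages_Surp)   # surplus pages beyond the static pool
    resv=$(get_meminfo HugePages_Rsvd)   # reserved but not yet faulted in
    total=$(get_meminfo HugePages_Total) # 1024 pages; Hugetlb 2097152 kB = 1024 x 2048 kB
    free=$(get_meminfo HugePages_Free)
    # with nothing mapped yet, the whole pool should sit idle
    (( anon == 0 && surp == 0 && resv == 0 && free == total )) || echo "unexpected hugepage state"
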
[log condensed: the HugePages_Surp walk repeats the identical read/compare entries for every meminfo key in the snapshot above until it reaches the match below]
00:03:09.966 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp ==
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.966 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:09.966 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:09.966 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:09.966 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:09.966 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:09.966 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:09.966 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:09.966 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:09.966 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.966 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.966 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.966 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.966 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.966 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.966 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.966 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175565632 kB' 'MemAvailable: 178440120 kB' 'Buffers: 3896 kB' 'Cached: 10132280 kB' 'SwapCached: 0 kB' 'Active: 7164172 kB' 'Inactive: 3507524 kB' 'Active(anon): 6772164 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538716 kB' 'Mapped: 169336 kB' 'Shmem: 6236644 kB' 'KReclaimable: 238816 kB' 'Slab: 825480 kB' 'SReclaimable: 238816 kB' 'SUnreclaim: 586664 kB' 'KernelStack: 20864 kB' 'PageTables: 9620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8296984 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315532 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3017684 kB' 'DirectMap2M: 16584704 kB' 'DirectMap1G: 182452224 kB' 00:03:09.966 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.966 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.966 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.966 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.966 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.966 20:28:44 
[... xtrace elided: get_meminfo compares each field (MemTotal through HugePages_Free) against HugePages_Rsvd and continues ...]
00:03:09.968 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:09.968 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:09.968 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
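Two reading aids for the trace above. First, the runs of backslashes are not corruption: when bash's xtrace prints [[ $var == "$get" ]], it escapes every character of the quoted right-hand side so it displays as a literal (non-glob) pattern, which is why "HugePages_Rsvd" renders as \H\u\g\e\P\a\g\e\s\_\R\s\v\d. Second, each get_meminfo call is the same loop: pick /proc/meminfo (or a node's sysfs meminfo), split each line on ': ', and echo the value of the first field whose name matches. A minimal standalone sketch of that loop follows; it is a hypothetical simplification, not the SPDK helper itself (the real setup/common.sh slurps the file with mapfile and strips the per-node prefix with an extglob):

    #!/usr/bin/env bash
    # Sketch: print one meminfo value, system-wide or for one NUMA node.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo line var val _
        # Per-node counters live in sysfs; every line there starts "Node N ".
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#Node "$node" }             # no-op for /proc/meminfo
            IFS=': ' read -r var val _ <<< "$line" # "HugePages_Rsvd: 0" -> var,val
            if [[ $var == "$get" ]]; then          # xtrace shows this as \H\u\g\e...
                echo "$val"
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    surp=$(get_meminfo HugePages_Surp)      # system-wide, 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run
    node0=$(get_meminfo HugePages_Total 0)  # node 0 only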
00:03:09.968 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:09.968 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:03:09.968 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:09.968 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:09.968 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:09.968 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:09.968 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:09.968 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:09.968 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:09.968 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:09.968 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:09.968 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:09.968 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:09.968 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:09.968 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:09.968 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:09.968 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:09.968 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:09.968 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:09.968 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175565368 kB' 'MemAvailable: 178439856 kB' 'Buffers: 3896 kB' 'Cached: 10132304 kB' 'SwapCached: 0 kB' 'Active: 7163584 kB' 'Inactive: 3507524 kB' 'Active(anon): 6771576 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538172 kB' 'Mapped: 169328 kB' 'Shmem: 6236668 kB' 'KReclaimable: 238816 kB' 'Slab: 825480 kB' 'SReclaimable: 238816 kB' 'SUnreclaim: 586664 kB' 'KernelStack: 20816 kB' 'PageTables: 9076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8297020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315660 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3017684 kB' 'DirectMap2M: 16584704 kB' 'DirectMap1G: 182452224 kB'
[... xtrace elided: get_meminfo compares each field (MemTotal through Unaccepted) against HugePages_Total and continues ...]
00:03:09.970 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:09.970 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:09.970 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:09.970 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:09.970 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
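At this point hugepages.sh has cross-checked its bookkeeping: the kernel-reported HugePages_Total (1024, just echoed by get_meminfo) must equal the requested nr_hugepages plus surplus and reserved pages (1024 + 0 + 0). The same check in isolation, as a sketch using the values from this run (awk stands in for get_meminfo; not the script's literal code):

    #!/usr/bin/env bash
    nr_hugepages=1024   # what the test configured
    surp=0              # HugePages_Surp from the scan above
    resv=0              # HugePages_Rsvd from the scan above
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)

    # The trace's (( 1024 == nr_hugepages + surp + resv )) check, spelled out:
    if (( total != nr_hugepages + surp + resv )); then
        echo "hugepage accounting mismatch: total=$total" >&2
        exit 1
    fi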
00:03:09.970 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:09.970 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:09.970 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:09.970 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:09.970 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:09.970 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:09.970 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:09.970 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:09.970 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:09.970 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:09.970 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:09.970 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:09.970 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:09.970 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:09.970 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:09.970 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:09.970 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:09.970 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:09.970 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:09.970 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:09.970 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:09.970 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86745888 kB' 'MemUsed: 10916796 kB' 'SwapCached: 0 kB' 'Active: 5003588 kB' 'Inactive: 3336416 kB' 'Active(anon): 4846048 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3336416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8182912 kB' 'Mapped: 74012 kB' 'AnonPages: 160232 kB' 'Shmem: 4688956 kB' 'KernelStack: 10728 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 129360 kB' 'Slab: 414284 kB' 'SReclaimable: 129360 kB' 'SUnreclaim: 284924 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... xtrace elided: get_meminfo compares each node0 meminfo field (MemTotal through Unaccepted) against HugePages_Surp and continues; the excerpt ends mid-scan ...]
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.971 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.971 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.971 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.971 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.971 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.971 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.971 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.971 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.971 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 88818228 kB' 'MemUsed: 4900240 kB' 'SwapCached: 0 kB' 'Active: 2159368 kB' 'Inactive: 171108 kB' 'Active(anon): 1924900 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 171108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1953328 kB' 'Mapped: 94900 kB' 'AnonPages: 377288 kB' 'Shmem: 1547752 kB' 
'KernelStack: 9944 kB' 'PageTables: 4696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 109456 kB' 'Slab: 411196 kB' 'SReclaimable: 109456 kB' 'SUnreclaim: 301740 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.972 20:28:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.972 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.232 20:28:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.232 20:28:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:10.232 node0=512 expecting 512 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:10.232 node1=512 expecting 512 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:10.232 00:03:10.232 real 0m2.946s 00:03:10.232 user 0m1.215s 00:03:10.232 sys 0m1.786s 00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:10.232 20:28:44 
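Everything from the MemFree probe above down to the `return 0` is one call to `get_meminfo` from `setup/common.sh`: pick `/proc/meminfo` or the per-node `/sys/devices/system/node/nodeN/meminfo`, strip the `Node N ` prefix so both file formats look alike, then scan the `key: value` pairs until the requested key matches and echo its value. A minimal standalone sketch of that parsing pattern (names mirror the trace; this is an illustration, not the verbatim SPDK source):

```bash
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern seen in the trace above (illustrative,
# not the verbatim setup/common.sh source).
shopt -s extglob

get_meminfo() {
    local get=$1 node=$2
    local var val _
    local mem_f line

    mem_f=/proc/meminfo
    # Per-node meminfo files exist under /sys when NUMA is enabled.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node lines read "Node 1 HugePages_Surp: 0"; drop the prefix so
    # both file formats parse identically (extglob pattern, as in the trace).
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"   # split "key: value kB"
        [[ $var == "$get" ]] || continue         # the @32 compare-and-continue
        echo "$val"                              # the @33 echo
        return 0
    done
    return 1
}

get_meminfo HugePages_Surp 1   # -> 0 on the node traced above
```

Called as `$(get_meminfo HugePages_Surp 1)` the way hugepages.sh does, the echoed value is captured by the command substitution rather than printed, which is why no `0` appears between the `echo 0` and `return 0` trace lines.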
00:03:10.232 20:28:44 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:10.232 ************************************
00:03:10.232 END TEST per_node_1G_alloc
00:03:10.232 ************************************
00:03:10.232 20:28:44 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:10.232 20:28:44 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:10.232 20:28:44 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:10.232 20:28:44 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:10.232 20:28:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:10.232 ************************************
00:03:10.232 START TEST even_2G_alloc
00:03:10.232 ************************************
00:03:10.232 20:28:44 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:03:10.232 20:28:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:10.232 20:28:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:10.232 20:28:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:10.232 20:28:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:10.232 20:28:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:10.232 20:28:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:10.232 20:28:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:10.232 20:28:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:10.232 20:28:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:10.232 20:28:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:10.232 20:28:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:10.232 20:28:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:10.233 20:28:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:10.233 20:28:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:10.233 20:28:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:10.233 20:28:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:10.233 20:28:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:10.233 20:28:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:10.233 20:28:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:10.233 20:28:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:10.233 20:28:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:10.233 20:28:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:10.233 20:28:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:10.233 20:28:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:10.233 20:28:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
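The trace above also records the size-to-pages arithmetic for the new test: 2097152 kB requested at the 2048 kB default hugepage size yields `nr_hugepages=1024`, and `get_test_nr_hugepages_per_node` hands each of the two NUMA nodes 512 pages. A sketch of that split, reconstructed from the `@81`-`@84` trace lines (the division by the default hugepage size is an assumption about `get_test_nr_hugepages` internals):

```bash
#!/usr/bin/env bash
# Even-split arithmetic as read off the trace above (assumptions noted).
size=2097152            # requested kB of hugepage memory (2 GiB)
default_hugepages=2048  # Hugepagesize in kB, assumed taken from /proc/meminfo
_no_nodes=2             # NUMA nodes on this test box

nr_hugepages=$((size / default_hugepages))   # 1024 pages
_nr_hugepages=$nr_hugepages

declare -a nodes_test
while ((_no_nodes > 0)); do
    # Each node gets an equal share of what remains, last node first,
    # matching the traced "@83 : 512 / @84 : 1" then "@83 : 0 / @84 : 0".
    nodes_test[_no_nodes - 1]=$((_nr_hugepages / _no_nodes))
    : $((_nr_hugepages -= nodes_test[_no_nodes - 1]))
    : $((_no_nodes -= 1))
done

echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=512 node1=512
```

With `NRHUGE=1024` and `HUGE_EVEN_ALLOC=yes` exported, the subsequent `setup.sh` run is what actually commits this even allocation to the kernel.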
00:03:10.233 20:28:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:10.233 20:28:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:10.233 20:28:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:12.771 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:12.771 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:12.771 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:12.771 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:12.771 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:12.771 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:12.771 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:12.771 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:12.772 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:12.772 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:12.772 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:12.772 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:12.772 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:12.772 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:12.772 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:12.772 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:12.772 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:12.772 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:12.772 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:12.772 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:12.772 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:12.772 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:12.772 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:12.772 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:12.772 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:12.772 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:12.772 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:12.772 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:12.772 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:12.772 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:12.772 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:12.772 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:12.772 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:12.772 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:12.772 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:12.772 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:12.772 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:12.772 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175549420 kB' 'MemAvailable: 178423908 kB' 'Buffers: 3896 kB' 'Cached: 10132408 kB' 'SwapCached: 0 kB' 'Active: 7163960 kB' 'Inactive: 3507524 kB' 'Active(anon): 6771952 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537868 kB' 'Mapped: 168968 kB' 'Shmem: 6236772 kB' 'KReclaimable: 238816 kB' 'Slab: 825532 kB' 'SReclaimable: 238816 kB' 'SUnreclaim: 586716 kB' 'KernelStack: 20816 kB' 'PageTables: 9516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8298840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315804 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3017684 kB' 'DirectMap2M: 16584704 kB' 'DirectMap1G: 182452224 kB'
[... setup/common.sh@31-@32 read-and-compare trace repeated for /proc/meminfo keys MemTotal through HardwareCorrupted; none match \A\n\o\n\H\u\g\e\P\a\g\e\s ...]
00:03:12.773 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:12.773 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:12.773 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:12.773 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:13.037 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:13.037 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:13.037 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:13.037 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:13.037 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:13.037 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:13.037 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:13.037 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:13.037 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:13.037 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:13.037 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:13.037 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:13.037 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175552728 kB' 'MemAvailable: 178427216 kB' 'Buffers: 3896 kB' 'Cached: 10132412 kB' 'SwapCached: 0 kB' 'Active: 7164572 kB' 'Inactive: 3507524 kB' 'Active(anon): 6772564 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538568 kB' 'Mapped: 168968 kB' 'Shmem: 6236776 kB' 'KReclaimable: 238816 kB' 'Slab: 825468 kB' 'SReclaimable: 238816 kB' 'SUnreclaim: 586652 kB' 'KernelStack: 20688 kB' 'PageTables: 8964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8296096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315708 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3017684 kB' 'DirectMap2M: 16584704 kB' 'DirectMap1G: 182452224 kB'
00:03:13.037 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:13.037 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:13.037 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:13.037 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[... setup/common.sh@31-@32 read-and-compare trace continues for /proc/meminfo keys MemFree through SUnreclaim; none match \H\u\g\e\P\a\g\e\s\_\S\u\r\p ...]
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.038 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.038 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.038 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.038 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.038 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.038 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.038 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.038 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.038 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.038 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.038 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.038 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.038 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.038 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.038 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.038 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.038 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.038 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.038 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.038 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.038 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.038 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.038 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.038 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.038 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.038 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.038 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.038 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.038 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.038 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.038 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.038 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.038 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.038 20:28:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.038 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.038 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.038 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.038 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.038 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175554820 kB' 'MemAvailable: 178429308 kB' 'Buffers: 3896 kB' 'Cached: 10132428 kB' 'SwapCached: 0 kB' 'Active: 7162880 kB' 'Inactive: 3507524 kB' 'Active(anon): 6770872 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537348 kB' 'Mapped: 168936 kB' 'Shmem: 6236792 kB' 'KReclaimable: 238816 kB' 'Slab: 825516 kB' 'SReclaimable: 238816 kB' 'SUnreclaim: 586700 kB' 'KernelStack: 20384 kB' 'PageTables: 8124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8296260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315580 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3017684 kB' 'DirectMap2M: 16584704 kB' 'DirectMap1G: 182452224 kB' 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.039 
20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.039 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.040 20:28:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.040 20:28:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.040 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:13.041 nr_hugepages=1024 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:13.041 resv_hugepages=0 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:13.041 surplus_hugepages=0 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:13.041 anon_hugepages=0 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:13.041 20:28:47 
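[editor's note: the xtrace above is the get_meminfo helper from setup/common.sh being called repeatedly by setup/hugepages.sh to collect AnonHugePages, HugePages_Surp, HugePages_Rsvd and (next) HugePages_Total before the even-allocation check. The runs of backslash-escaped patterns such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p are simply how bash xtrace prints a quoted, literal right-hand side of [[ $var == "$get" ]]. Below is a minimal standalone sketch of the same parsing pattern; the function name and fallback default are illustrative assumptions, not the exact SPDK source.]

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern seen in the trace (names illustrative).
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=$2
    local var val _ line
    local mem_f=/proc/meminfo mem

    # Per-node stats live under /sys/devices/system/node/nodeN/meminfo;
    # fall back to the system-wide /proc/meminfo when no node is given.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node lines are prefixed "Node 0 MemTotal: ..."; strip the prefix
    # so both file formats parse the same way (+([0-9]) needs extglob).
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # Non-matching keys are skipped; these are the long "continue"
        # runs condensed out of the trace above.
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        return 0
    done
    echo 0
}

get_meminfo_sketch HugePages_Total     # system-wide, e.g. 1024
get_meminfo_sketch HugePages_Surp 0    # NUMA node 0 only

[The IFS=': ' read -r var val _ split is what makes one loop body handle both "MemTotal: 191381152 kB" and unit-less keys like "HugePages_Total: 1024": val always gets the number and the trailing "kB", if any, lands in _.]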
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175554876 kB' 'MemAvailable: 178429364 kB' 'Buffers: 3896 kB' 'Cached: 10132428 kB' 'SwapCached: 0 kB' 'Active: 7162544 kB' 'Inactive: 3507524 kB' 'Active(anon): 6770536 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536972 kB' 'Mapped: 168936 kB' 'Shmem: 6236792 kB' 'KReclaimable: 238816 kB' 'Slab: 825548 kB' 'SReclaimable: 238816 kB' 'SUnreclaim: 586732 kB' 'KernelStack: 20464 kB' 'PageTables: 8568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8296284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315580 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3017684 kB' 'DirectMap2M: 16584704 kB' 'DirectMap1G: 182452224 kB' 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.041 
20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.041 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.042 20:28:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.042 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
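
Annotation: immediately below, the scan reaches the HugePages_Total line, echoes 1024, and verify_nr_hugepages asserts (( 1024 == nr_hugepages + surp + resv )). It then calls get_nodes to record how many huge pages the kernel actually placed on each NUMA node. A hedged sketch of that helper: the trace only shows the evaluated assignments (nodes_sys[...]=512 for both nodes), so fetching the value via get_meminfo on the right-hand side is an assumption:

    shopt -s extglob
    declare -a nodes_sys

    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # ${node##*node} reduces ".../node0" to the bare index "0"
            nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
        done
        no_nodes=${#nodes_sys[@]}
        (( no_nodes > 0 ))   # fail if no NUMA nodes were found
    }

On this box both nodes report 512 pages, so no_nodes=2 and the even 2G split (1024 pages total) checks out.
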
00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86738592 kB' 'MemUsed: 10924092 kB' 'SwapCached: 0 kB' 'Active: 
5002564 kB' 'Inactive: 3336416 kB' 'Active(anon): 4845024 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3336416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8182916 kB' 'Mapped: 74012 kB' 'AnonPages: 159212 kB' 'Shmem: 4688960 kB' 'KernelStack: 10520 kB' 'PageTables: 3964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 129360 kB' 'Slab: 414532 kB' 'SReclaimable: 129360 kB' 'SUnreclaim: 285172 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.043 20:28:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.043 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 88816404 kB' 'MemUsed: 
4902064 kB' 'SwapCached: 0 kB' 'Active: 2159644 kB' 'Inactive: 171108 kB' 'Active(anon): 1925176 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 171108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1953472 kB' 'Mapped: 94924 kB' 'AnonPages: 377340 kB' 'Shmem: 1547896 kB' 'KernelStack: 9928 kB' 'PageTables: 4544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 109456 kB' 'Slab: 411016 kB' 'SReclaimable: 109456 kB' 'SUnreclaim: 301560 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.044 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.045 
20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
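
Annotation: the field-by-field scan running across these records is get_meminfo HugePages_Surp 1, the node-1 counterpart of the node-0 lookup above. The surrounding @115-@117 records show the accounting: the reserved-page count is added to each node's expected total, then the node's surplus pages are fetched and added too (both are 0 in this run, so the expectations stay at 512). Sketched from those records, with surp and resv assumed to be plain integers computed earlier in verify_nr_hugepages:

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                 # reserved pages count toward each node
        surp=$(get_meminfo HugePages_Surp "$node")     # per-node surplus; 0 on both nodes here
        (( nodes_test[node] += surp ))
    done
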
00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
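
Annotation: a few records below, the node-1 lookup echoes 0 and the test wraps up. Each node's observed total is folded into sorted_t and its expected total into sorted_s, the per-node result is echoed ("node0=512 expecting 512", "node1=512 expecting 512"), and the final [[ 512 == 512 ]] check passes. The trace only shows the evaluated form of that last comparison; comparing the two index sets, as below, is an assumption:

    declare -a sorted_t sorted_s
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1   # indices used in sorted_t = distinct observed counts
        sorted_s[nodes_sys[node]]=1    # indices used in sorted_s = distinct expected counts
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
    done
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]]   # both sets collapse to "512" here

Using array indices as a set means the test passes whenever observed and expected counts agree per node, regardless of ordering.
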
00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:13.045 node0=512 expecting 512 00:03:13.045 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:13.046 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:13.046 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:13.046 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:13.046 node1=512 expecting 512 00:03:13.046 20:28:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:13.046 00:03:13.046 real 0m2.867s 00:03:13.046 user 0m1.166s 00:03:13.046 sys 0m1.768s 00:03:13.046 20:28:47 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:13.046 20:28:47 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:13.046 ************************************ 00:03:13.046 END TEST even_2G_alloc 00:03:13.046 ************************************ 00:03:13.046 20:28:47 setup.sh.hugepages -- 
common/autotest_common.sh@1142 -- # return 0 00:03:13.046 20:28:47 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:13.046 20:28:47 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:13.046 20:28:47 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:13.046 20:28:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:13.046 ************************************ 00:03:13.046 START TEST odd_alloc 00:03:13.046 ************************************ 00:03:13.046 20:28:47 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:13.046 20:28:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:13.046 20:28:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:13.046 20:28:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:13.046 20:28:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:13.046 20:28:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:13.046 20:28:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:13.046 20:28:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:13.046 20:28:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:13.046 20:28:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:13.046 20:28:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:13.046 20:28:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:13.046 20:28:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:13.046 20:28:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:13.046 20:28:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:13.046 20:28:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:13.046 20:28:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:13.046 20:28:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:13.046 20:28:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:13.046 20:28:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:13.046 20:28:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:13.046 20:28:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:13.046 20:28:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:13.046 20:28:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:13.046 20:28:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:13.046 20:28:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:13.046 20:28:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:13.046 20:28:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:13.046 20:28:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:15.584 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:15.584 
0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:15.584 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:15.584 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:15.584 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:15.584 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:15.584 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:15.584 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:15.584 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:15.584 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:15.584 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:15.584 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:15.584 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:15.584 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:15.584 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:15.584 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:15.584 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:15.584 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:15.584 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:15.584 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:15.584 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:15.584 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:15.584 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:15.584 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:15.584 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175542008 kB' 'MemAvailable: 178416496 kB' 'Buffers: 3896 kB' 'Cached: 10132552 kB' 'SwapCached: 0 kB' 'Active: 7165328 kB' 'Inactive: 3507524 kB' 'Active(anon): 6773320 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 
'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539348 kB' 'Mapped: 169536 kB' 'Shmem: 6236916 kB' 'KReclaimable: 238816 kB' 'Slab: 825400 kB' 'SReclaimable: 238816 kB' 'SUnreclaim: 586584 kB' 'KernelStack: 20448 kB' 'PageTables: 8552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8301120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315548 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3017684 kB' 'DirectMap2M: 16584704 kB' 'DirectMap1G: 182452224 kB' 00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.585 20:28:49 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:15.585 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: setup/common.sh@32 compares each remaining /proc/meminfo field, Inactive through HardwareCorrupted, against AnonHugePages; every non-match hits "continue" and the @31 IFS=': ' / read -r var val _ pair advances to the next field]
00:03:15.587 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:15.587 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:15.587 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:15.587 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
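The trace above is SPDK's get_meminfo helper (setup/common.sh@17-@33) resolving AnonHugePages to 0. Reconstructed from the xtrace alone, a minimal sketch of that helper looks roughly like this; treat it as an illustration of the visible steps, not the verbatim SPDK source, and note that the extglob pattern for the per-node prefix strip is carried over from the @29 line as an assumption:

shopt -s extglob  # needed for the +([0-9]) pattern seen at setup/common.sh@29

get_meminfo() {
	local get=$1 node=${2:-}
	local var val
	local mem_f mem
	mem_f=/proc/meminfo
	# With no node argument the per-node path below does not exist
	# (the trace tests /sys/devices/system/node/node/meminfo),
	# so the helper falls back to the system-wide /proc/meminfo.
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	# Per-node meminfo prefixes every line with "Node N "; strip it.
	mem=("${mem[@]#Node +([0-9]) }")
	# Scan field by field; every non-matching field hits "continue" (@32).
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val" && return 0  # e.g. "echo 0" for AnonHugePages here (@33)
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

Called as get_meminfo AnonHugePages, this prints the value column only (0 on this box), which setup/hugepages.sh@97 then captures as anon=0.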
00:03:15.587 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:15.587 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:15.587 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:15.587 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:15.587 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:15.587 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.587 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.587 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.587 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.587 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.587 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:15.587 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:15.587 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175543324 kB' 'MemAvailable: 178417812 kB' 'Buffers: 3896 kB' 'Cached: 10132556 kB' 'SwapCached: 0 kB' 'Active: 7168428 kB' 'Inactive: 3507524 kB' 'Active(anon): 6776420 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543352 kB' 'Mapped: 169452 kB' 'Shmem: 6236920 kB' 'KReclaimable: 238816 kB' 'Slab: 825384 kB' 'SReclaimable: 238816 kB' 'SUnreclaim: 586568 kB' 'KernelStack: 20608 kB' 'PageTables: 8748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8305376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315552 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3017684 kB' 'DirectMap2M: 16584704 kB' 'DirectMap1G: 182452224 kB'
[xtrace condensed: setup/common.sh@32 compares every snapshot field, MemTotal through HugePages_Rsvd, against HugePages_Surp; each non-match hits "continue"]
00:03:15.589 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:15.589 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:15.589 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:15.589 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
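Each lookup in this sequence (anon above, HugePages_Surp here, then HugePages_Rsvd and HugePages_Total below) re-reads and re-scans the whole meminfo snapshot. As a quick cross-check outside the harness, the same counters can be pulled with a one-liner; this awk invocation is a hypothetical convenience for the reader, not something the test itself runs:

awk '$1 ~ /^HugePages_(Total|Free|Rsvd|Surp):$/ {print $1, $2}' /proc/meminfo

On this node it would report HugePages_Total: 1025, HugePages_Free: 1025, HugePages_Rsvd: 0 and HugePages_Surp: 0, matching the values visible in the printf snapshots.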
00:03:15.589 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:15.589 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:15.589 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:15.589 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:15.589 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:15.589 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.589 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.589 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.589 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.589 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.589 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:15.589 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:15.589 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175554412 kB' 'MemAvailable: 178428900 kB' 'Buffers: 3896 kB' 'Cached: 10132592 kB' 'SwapCached: 0 kB' 'Active: 7168588 kB' 'Inactive: 3507524 kB' 'Active(anon): 6776580 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542960 kB' 'Mapped: 169844 kB' 'Shmem: 6236956 kB' 'KReclaimable: 238816 kB' 'Slab: 825096 kB' 'SReclaimable: 238816 kB' 'SUnreclaim: 586280 kB' 'KernelStack: 20608 kB' 'PageTables: 9052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8305396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315600 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3017684 kB' 'DirectMap2M: 16584704 kB' 'DirectMap1G: 182452224 kB'
[xtrace condensed: setup/common.sh@32 compares every snapshot field, MemTotal through HugePages_Free, against HugePages_Rsvd; each non-match hits "continue"]
00:03:15.592 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:15.592 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:15.592 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:15.592 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:15.592 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
nr_hugepages=1025
00:03:15.592 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:15.592 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:15.592 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:15.592 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:15.592 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
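With the three lookups complete, setup/hugepages.sh@107-@109 verifies the odd-sized allocation: the kernel must report exactly the requested odd page count, with no surplus or reserved pages hiding in the total. Substituting the values just parsed makes the arithmetic trivial; the following is a condensed sketch of the two guards, not the script's literal surrounding code:

nr_hugepages=1025 surp=0 resv=0 anon=0
# guard 1 (@107): the reported total must equal requested + surplus + reserved
(( 1025 == nr_hugepages + surp + resv ))   # 1025 == 1025 + 0 + 0 -> true
# guard 2 (@109): the odd page count itself must have stuck
(( 1025 == nr_hugepages ))                 # 1025 == 1025 -> true

Both expressions evaluate true, so the odd_alloc check proceeds to confirm HugePages_Total directly, as the trace below shows.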
00:03:15.592 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:15.592 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:15.592 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:15.592 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:15.592 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:15.592 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.592 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.592 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.592 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.592 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.592 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:15.592 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:15.592 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175561312 kB' 'MemAvailable: 178435800 kB' 'Buffers: 3896 kB' 'Cached: 10132592 kB' 'SwapCached: 0 kB' 'Active: 7164264 kB' 'Inactive: 3507524 kB' 'Active(anon): 6772256 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538656 kB' 'Mapped: 168948 kB' 'Shmem: 6236956 kB' 'KReclaimable: 238816 kB' 'Slab: 824936 kB' 'SReclaimable: 238816 kB' 'SUnreclaim: 586120 kB' 'KernelStack: 20672 kB' 'PageTables: 9012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8299296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315644 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3017684 kB' 'DirectMap2M: 16584704 kB' 'DirectMap1G: 182452224 kB'
[xtrace condensed: setup/common.sh@32 compares MemTotal through PageTables against HugePages_Total; each non-match hits "continue", and the scan resumes below]
00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables ==
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:15.594 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:15.595 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:15.595 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:15.595 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:15.595 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:15.595 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:15.595 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:15.595 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:15.595 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:15.595 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:15.595 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:15.595 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:15.595 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.595 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:15.595 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:15.595 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.595 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.595 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.595 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.595 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86739504 kB' 'MemUsed: 10923180 kB' 'SwapCached: 0 kB' 'Active: 5003764 kB' 'Inactive: 3336416 kB' 'Active(anon): 4846224 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3336416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 
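The trace above is the whole of get_meminfo: slurp /proc/meminfo (or a node's copy, whose lines carry a "Node N " prefix) into memory, then split each entry on ':' and whitespace until the requested key turns up. A minimal standalone sketch of the same idiom, assuming bash with extglob; the name get_meminfo_field and its interface are illustrative, not SPDK's actual common.sh:

#!/usr/bin/env bash
shopt -s extglob
# Sketch: return one field from /proc/meminfo, or from
# /sys/devices/system/node/node<N>/meminfo when a node number is given.
get_meminfo_field() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
    [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    while read -r line; do
        line=${line#Node +([0-9]) }        # per-node files prefix "Node N "
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done <"$mem_f"
    return 1
}
# e.g. get_meminfo_field HugePages_Total   -> 1025 on the system in this log
#      get_meminfo_field HugePages_Surp 0  -> surplus pages on node0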
00:03:15.595 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:15.595 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:15.595 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:15.595 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:15.595 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:03:15.595 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:15.595 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:15.595 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.595 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:15.595 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:15.595 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.595 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.595 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:15.595 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:15.595 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86739504 kB' 'MemUsed: 10923180 kB' 'SwapCached: 0 kB' 'Active: 5003764 kB' 'Inactive: 3336416 kB' 'Active(anon): 4846224 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3336416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8182920 kB' 'Mapped: 74012 kB' 'AnonPages: 160472 kB' 'Shmem: 4688964 kB' 'KernelStack: 11032 kB' 'PageTables: 5288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 129360 kB' 'Slab: 413948 kB' 'SReclaimable: 129360 kB' 'SUnreclaim: 284588 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@31-32 read loop compares every node0 field above against HugePages_Surp, issuing 'continue' on each non-match]
00:03:15.596 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:15.596 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:15.596 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:15.596 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:15.596 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:15.596 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:15.596 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:15.596 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:15.596 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:03:15.597 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:15.597 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:15.597 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.597 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:15.597 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:15.597 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.597 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.597 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:15.597 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:15.597 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 88822136 kB' 'MemUsed: 4896332 kB' 'SwapCached: 0 kB' 'Active: 2160392 kB' 'Inactive: 171108 kB' 'Active(anon): 1925924 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 171108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1953612 kB' 'Mapped: 94936 kB' 'AnonPages: 378084 kB' 'Shmem: 1548036 kB' 'KernelStack: 9912 kB' 'PageTables: 4544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 109456 kB' 'Slab: 410988 kB' 'SReclaimable: 109456 kB' 'SUnreclaim: 301532 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@31-32 read loop compares every node1 field above against HugePages_Surp, issuing 'continue' on each non-match]
00:03:15.599 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:15.599 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:15.599 20:28:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:15.599 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
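Each node is queried the same way through /sys/devices/system/node/nodeN/meminfo, and the per-node counters have to reconcile with the global one (here 512 + 513 = 1025). A hand-check along those lines, assuming a sysfs layout like the one in this log; it is not the suite's exact assertion, which also folds in surplus and reserved pages:

#!/usr/bin/env bash
# Sketch: cross-check the global hugepage count against the per-node
# counters -- the identity the odd_alloc test relies on (1025 == 512 + 513).
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
sum=0
for f in /sys/devices/system/node/node*/meminfo; do
    node_pages=$(awk '/HugePages_Total:/ {print $NF}' "$f")
    echo "$(basename "$(dirname "$f")"): $node_pages hugepages"
    sum=$((sum + node_pages))
done
echo "global=$total sum_of_nodes=$sum"
[[ $total -eq $sum ]] || echo "mismatch (surplus/reserved pages in flight?)"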
00:03:15.599 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:15.599 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:15.599 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:15.599 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:15.599 node0=512 expecting 513
00:03:15.599 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:15.599 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:15.599 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:15.599 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:15.599 node1=513 expecting 512
00:03:15.599 20:28:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:15.599
00:03:15.599 real	0m2.363s
00:03:15.599 user	0m0.837s
00:03:15.599 sys	0m1.334s
00:03:15.599 20:28:49 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:15.599 20:28:49 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:15.599 ************************************
00:03:15.599 END TEST odd_alloc
00:03:15.599 ************************************
00:03:15.599 20:28:49 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
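odd_alloc's premise is that 1025 pages cannot split evenly across two nodes, so one node ends up with the extra page (the 512/513 pair above). Toy arithmetic reproducing that expectation; which node actually absorbs the remainder is a detail of the test suite, not something the kernel guarantees:

#!/usr/bin/env bash
# Sketch: an even share per node, remainder parked on the last node.
nr_hugepages=1025 no_nodes=2
declare -a nodes
for ((i = 0; i < no_nodes; i++)); do
    nodes[i]=$((nr_hugepages / no_nodes))   # 512 each
done
((nodes[no_nodes - 1] += nr_hugepages % no_nodes))   # +1 odd page
echo "${nodes[@]}"   # -> 512 513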
00:03:15.599 20:28:49 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:15.599 20:28:49 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:15.599 20:28:49 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:15.599 20:28:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:15.599 ************************************
00:03:15.599 START TEST custom_alloc
00:03:15.599 ************************************
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:15.599 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:15.600 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:15.600 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:15.600 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:15.600 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:03:15.600 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:15.600 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:15.600 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:15.600 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:03:15.600 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:15.600 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:15.600 20:28:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:03:15.600 20:28:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:15.600 20:28:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:18.135 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:18.135 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:18.135 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:18.135 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:18.135 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:18.135 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:18.135 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:18.135 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:18.135 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:18.135 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:18.135 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:18.135 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:18.135 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:18.135 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:18.135 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:18.135 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:18.135 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
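setup.sh consumes HUGENODE and ultimately programs the per-node counts through sysfs. A sketch of that last step, assuming root and 2048 kB hugepages as in this run; the want map simply mirrors nodes_hp[0]=512,nodes_hp[1]=1024 from above:

#!/usr/bin/env bash
# Sketch: allocate 2 MiB hugepages per NUMA node through the kernel's
# sysfs interface, the mechanism a HUGENODE-style setup drives (needs root).
declare -A want=([0]=512 [1]=1024)
for node in "${!want[@]}"; do
    f=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
    echo "${want[$node]}" >"$f"
    echo "node$node: requested ${want[$node]}, got $(<"$f")"
done
grep -E 'HugePages_Total|Hugetlb' /proc/meminfo   # expect 1536 pages total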
vfio-pci driver 00:03:18.135 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:18.135 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:18.135 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:18.135 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:18.135 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:18.135 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:18.135 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:18.135 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:18.135 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:18.135 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:18.135 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:18.135 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:18.135 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:18.135 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:18.135 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:18.135 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:18.136 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:18.136 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:18.136 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:18.136 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:18.136 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:18.136 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:18.136 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:18.136 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:18.136 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.136 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.136 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.136 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.136 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.136 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.136 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.136 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.136 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174505704 kB' 'MemAvailable: 177380144 kB' 'Buffers: 3896 kB' 'Cached: 10132712 kB' 'SwapCached: 0 kB' 'Active: 7161660 kB' 'Inactive: 3507524 kB' 'Active(anon): 6769652 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535344 kB' 'Mapped: 168968 kB' 'Shmem: 6237076 kB' 
'KReclaimable: 238720 kB' 'Slab: 824716 kB' 'SReclaimable: 238720 kB' 'SUnreclaim: 585996 kB' 'KernelStack: 20752 kB' 'PageTables: 9284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8299248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315660 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3017684 kB' 'DirectMap2M: 16584704 kB' 'DirectMap1G: 182452224 kB'
[... repetitive xtrace elided: setup/common.sh@32 compares each /proc/meminfo key above against AnonHugePages and skips it with 'continue' until the key matches ...]
00:03:18.137 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.137 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.137 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:18.137 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
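The trace above is the expansion of the get_meminfo helper: it snapshots /proc/meminfo (or a per-node meminfo file under /sys when a node index is given), strips the "Node <n> " prefix those per-node files carry, and walks the fields with IFS=': ' until the requested key matches, echoing its value. A minimal sketch of that pattern, assuming a for-loop over the captured lines behaves like the traced read loop (get_meminfo_sketch and its plumbing are illustrative, not a verbatim copy of setup/common.sh):

    # Sketch: look up one key in /proc/meminfo or a per-NUMA-node meminfo file.
    get_meminfo_sketch() {
        local get=$1 node=$2
        local line var val _
        local mem_f=/proc/meminfo
        local -a mem
        # per-node counters live under /sys when a node index is supplied
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        shopt -s extglob
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # drop "Node <n> " prefix, if any
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

Against the snapshot above, get_meminfo_sketch AnonHugePages prints 0, matching the anon=0 assignment just recorded.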
00:03:18.137 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:18.137 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.137 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:18.137 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:18.137 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.137 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.137 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.137 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.137 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.137 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.137 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.137 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.137 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174505668 kB' 'MemAvailable: 177380092 kB' 'Buffers: 3896 kB' 'Cached: 10132716 kB' 'SwapCached: 0 kB' 'Active: 7161600 kB' 'Inactive: 3507524 kB' 'Active(anon): 6769592 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535824 kB' 'Mapped: 168960 kB' 'Shmem: 6237080 kB' 'KReclaimable: 238688 kB' 'Slab: 824736 kB' 'SReclaimable: 238688 kB' 'SUnreclaim: 586048 kB' 'KernelStack: 20784 kB' 'PageTables: 9340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8297776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315532 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3017684 kB' 'DirectMap2M: 16584704 kB' 'DirectMap1G: 182452224 kB'
[... repetitive xtrace elided: the same per-key scan, this time looking for HugePages_Surp ...]
00:03:18.139 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.139 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.139 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:18.139 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
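With anon and surp in hand, the remaining lookup below fetches HugePages_Rsvd, and verify_nr_hugepages then checks the pool against what the test requested. As a hedged sketch of that arithmetic, reusing get_meminfo_sketch from above (the real check in setup/hugepages.sh also validates per-node counts, which this omits): the 512 + 1024 pages requested for the two nodes must show up as HugePages_Total: 1536, with no surplus pages and every page still free or reserved.

    # Sketch of the pool-level consistency check; expected=1536 (512 + 1024) here.
    verify_total_sketch() {
        local expected=$1
        local total free rsvd surp
        total=$(get_meminfo_sketch HugePages_Total)
        free=$(get_meminfo_sketch HugePages_Free)
        rsvd=$(get_meminfo_sketch HugePages_Rsvd)
        surp=$(get_meminfo_sketch HugePages_Surp)
        (( total == expected ))    || return 1  # pool sized as requested
        (( surp == 0 ))            || return 1  # nothing allocated as surplus
        (( free + rsvd <= total )) || return 1  # accounting sanity
    }

On this run verify_total_sketch 1536 would succeed: the snapshots report HugePages_Total: 1536, HugePages_Free: 1536, and zero reserved and surplus pages.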
00:03:18.139 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:18.139 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:18.139 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:18.139 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:18.139 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.139 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.139 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.139 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.139 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.139 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.139 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.139 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.139 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174504772 kB' 'MemAvailable: 177379196 kB' 'Buffers: 3896 kB' 'Cached: 10132732 kB' 'SwapCached: 0 kB' 'Active: 7161520 kB' 'Inactive: 3507524 kB' 'Active(anon): 6769512 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535748 kB' 'Mapped: 168960 kB' 'Shmem: 6237096 kB' 'KReclaimable: 238688 kB' 'Slab: 824736 kB' 'SReclaimable: 238688 kB' 'SUnreclaim: 586048 kB' 'KernelStack: 20592 kB' 'PageTables: 9128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8299288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315548 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3017684 kB' 'DirectMap2M: 16584704 kB' 'DirectMap1G: 182452224 kB'
[... repetitive xtrace elided: the per-key scan for HugePages_Rsvd; the captured log breaks off partway through this scan ...]
setup/common.sh@31 -- # IFS=': ' 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:18.141 nr_hugepages=1536 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:18.141 resv_hugepages=0 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:18.141 surplus_hugepages=0 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:18.141 anon_hugepages=0 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174504512 kB' 'MemAvailable: 177378936 kB' 'Buffers: 3896 kB' 'Cached: 10132756 kB' 'SwapCached: 0 kB' 'Active: 7161612 kB' 'Inactive: 3507524 kB' 'Active(anon): 6769604 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535820 kB' 'Mapped: 168960 kB' 'Shmem: 
6237120 kB' 'KReclaimable: 238688 kB' 'Slab: 824736 kB' 'SReclaimable: 238688 kB' 'SUnreclaim: 586048 kB' 'KernelStack: 20720 kB' 'PageTables: 9080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8299308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315612 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3017684 kB' 'DirectMap2M: 16584704 kB' 'DirectMap1G: 182452224 kB' 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.141 20:28:52 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.141 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.142 20:28:52 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 
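The field-by-field checks condensed above are a single read loop inside the get_meminfo helper of test/setup/common.sh. A minimal sketch of that loop, reconstructed from the traced statements (common.sh@17-33) — an approximation for readability, not a verbatim copy of the SPDK source:

shopt -s extglob   # needed for the "Node N " prefix strip below

# get_meminfo <Key> [node]: print the value of <Key> from /proc/meminfo,
# or from /sys/devices/system/node/node<node>/meminfo when a node is given.
get_meminfo() {
	local get=$1 node=${2:-} var val _ line
	local mem_f=/proc/meminfo mem
	[[ -e /sys/devices/system/node/node$node/meminfo ]] \
		&& mem_f=/sys/devices/system/node/node$node/meminfo
	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node N "
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue   # the unrolled 'continue's in the trace
		echo "$val"                        # e.g. 1536 for HugePages_Total
		return 0
	done
	return 1
}

get_meminfo HugePages_Total     # system-wide, as in hugepages.sh@110
get_meminfo HugePages_Surp 0    # NUMA node 0, as in hugepages.sh@117

Each 'continue' in the trace is one non-matching key falling through this loop, which is why the log repeats IFS=': ' / read -r var val _ once per meminfo entry.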
00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:18.142 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:18.143 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:18.143 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:18.143 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:18.143 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:18.143 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:18.143 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:18.143 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:18.143 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.143 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:18.143 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:18.143 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.143 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.143 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:18.143 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:18.143 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86725720 kB' 'MemUsed: 10936964 kB' 'SwapCached: 0 kB' 'Active: 4999948 kB' 'Inactive: 3336416 kB' 'Active(anon): 4842408 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3336416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8182920 kB' 'Mapped: 74012 kB' 'AnonPages: 156544 kB' 'Shmem: 4688964 kB' 'KernelStack: 10696 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 129232 kB' 'Slab: 413700 kB' 'SReclaimable: 129232 kB' 'SUnreclaim: 284468 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[trace condensed: setup/common.sh@31-32 scans every node0 meminfo key (MemTotal … HugePages_Free) against HugePages_Surp; none match until the last]
00:03:18.144 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:18.144 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:18.144 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:18.144 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:18.144 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:18.144 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:18.144 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:18.144 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:18.144 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:03:18.144 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:18.144 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:18.144 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.144 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:18.144 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:18.144 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.144 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.144 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:18.144 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:18.144 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 87775304 kB' 'MemUsed: 5943164 kB' 'SwapCached: 0 kB' 'Active: 2161344 kB' 'Inactive: 171108 kB' 'Active(anon): 1926876 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 171108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1953752 kB' 'Mapped: 94948 kB' 'AnonPages: 378948 kB' 'Shmem: 1548176 kB' 'KernelStack: 9944 kB' 'PageTables: 4784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 109456 kB' 'Slab: 411028 kB' 'SReclaimable: 109456 kB' 'SUnreclaim: 301572 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[trace condensed: setup/common.sh@31-32 scans every node1 meminfo key (MemTotal … HugePages_Free) against HugePages_Surp; none match until the last]
00:03:18.404 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:18.404 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:18.404 20:28:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:18.404 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:18.404 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:18.404 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:18.404 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:18.404 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:18.404 node0=512 expecting 512
00:03:18.404 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:18.404 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:18.404 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:18.404 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:18.404 node1=1024 expecting 1024
00:03:18.404 20:28:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:18.404 
00:03:18.404 real	0m2.740s
00:03:18.404 user	0m1.111s
00:03:18.404 sys	0m1.665s
00:03:18.404 20:28:52 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:18.404 20:28:52 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:18.404 ************************************
00:03:18.404 END TEST custom_alloc
00:03:18.404 ************************************
00:03:18.404 20:28:52 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:18.404 20:28:52 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:18.404 20:28:52 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:18.404 20:28:52 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:18.404 20:28:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:18.404 ************************************
00:03:18.404 START TEST no_shrink_alloc
00:03:18.404 ************************************
00:03:18.404 20:28:52 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
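The custom_alloc test that just ended asserted that the custom split landed exactly as requested: 512 pages on node0 and 1024 on node1 (the [[ 512,1024 == 512,1024 ]] check above). A minimal standalone sketch of the same per-node verification, assuming only the standard kernel sysfs layout for 2048 kB hugepages; this is not the SPDK helper itself, and the expected counts are copied from this run:

    #!/usr/bin/env bash
    # Compare per-node 2 MiB hugepage counts against the values asserted above.
    expected=(512 1024)   # node0, node1 -- taken from this run's trace
    for node in 0 1; do
        f=/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages
        actual=$(<"$f")
        echo "node${node}=${actual} expecting ${expected[node]}"
        (( actual == expected[node] )) || exit 1
    done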
00:03:18.405 20:28:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:18.405 20:28:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:18.405 20:28:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:18.405 20:28:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:03:18.405 20:28:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:18.405 20:28:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:18.405 20:28:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:18.405 20:28:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:18.405 20:28:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:18.405 20:28:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:18.405 20:28:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:18.405 20:28:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:18.405 20:28:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:18.405 20:28:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:18.405 20:28:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:18.405 20:28:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:18.405 20:28:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:18.405 20:28:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:18.405 20:28:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:18.405 20:28:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:18.405 20:28:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:18.405 20:28:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:21.004 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:21.004 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:21.004 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:21.004 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:21.004 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:21.004 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:21.004 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:21.004 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:21.004 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:21.004 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:21.004 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:21.004 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:21.004 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:21.004 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:21.004 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:21.004 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:21.004 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:21.004 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:21.004 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:21.004 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:21.004 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:21.004 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:21.004 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:21.004 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:21.004 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
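For the no_shrink_alloc run, get_test_nr_hugepages turns the requested size into a page count: with this system's 2048 kB default hugepage size, size=2097152 kB (2 GiB) gives nr_hugepages=1024, and since a single node id ('0') was passed, all 1024 pages go to nodes_test[0], as traced above. The same arithmetic as a standalone sketch (only the Hugepagesize lookup is added; the numbers come from the trace):

    # Re-derive nr_hugepages=1024 from the traced inputs.
    size_kb=2097152                                            # requested size in kB (2 GiB)
    hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this machine
    nr_hugepages=$(( size_kb / hp_kb ))                        # 2097152 / 2048 = 1024
    nodes_test=()
    for node in 0; do                                          # single user-supplied node id
        nodes_test[node]=$nr_hugepages                         # all 1024 pages on node 0
    done
    echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]}"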
00:03:21.004 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:21.004 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:21.004 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:21.004 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:21.004 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:21.004 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.004 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.004 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.004 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.004 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.004 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:21.004 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:21.004 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175546004 kB' 'MemAvailable: 178420428 kB' 'Buffers: 3896 kB' 'Cached: 10132852 kB' 'SwapCached: 0 kB' 'Active: 7163412 kB' 'Inactive: 3507524 kB' 'Active(anon): 6771404 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537504 kB' 'Mapped: 168932 kB' 'Shmem: 6237216 kB' 'KReclaimable: 238688 kB' 'Slab: 824720 kB' 'SReclaimable: 238688 kB' 'SUnreclaim: 586032 kB' 'KernelStack: 20992 kB' 'PageTables: 10176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8296820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315644 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3017684 kB' 'DirectMap2M: 16584704 kB' 'DirectMap1G: 182452224 kB'
00:03:21.005 [xtrace trimmed: setup/common.sh@31-32 re-reads each field above and executes `continue` for every name that is not AnonHugePages, from MemTotal down to HardwareCorrupted, until AnonHugePages matches]
00:03:21.005 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:21.006 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:21.006 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:21.006 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:21.006 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:21.006 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:21.006 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:21.006 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:21.006 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.006 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.006 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.006 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.006 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.006 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:21.006 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:21.006 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175549016 kB' 'MemAvailable: 178423440 kB' 'Buffers: 3896 kB' 'Cached: 10132856 kB' 'SwapCached: 0 kB' 'Active: 7162024 kB' 'Inactive: 3507524 kB' 'Active(anon): 6770016 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536104 kB' 'Mapped: 168928 kB' 'Shmem: 6237220 kB' 'KReclaimable: 238688 kB' 'Slab: 825044 kB' 'SReclaimable: 238688 kB' 'SUnreclaim: 586356 kB' 'KernelStack: 20464 kB' 'PageTables: 9076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8296972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315420 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3017684 kB' 'DirectMap2M: 16584704 kB' 'DirectMap1G: 182452224 kB'
00:03:21.006 [xtrace trimmed: the same setup/common.sh@31-32 scan walks every field again until HugePages_Surp matches]
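Each get_meminfo call above follows the same traced pattern: pick /proc/meminfo (or a node's own meminfo file when a node id is given), strip the "Node N " prefix, then scan field by field until the requested name matches and echo its value; the long runs of `continue` in the trace are that scan. Reconstructed below as a simplified sketch from the xtrace, not copied from setup/common.sh:

    get_meminfo_sketch() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        local -a mem
        # A per-node query reads that node's meminfo; its lines carry a
        # "Node N " prefix that must be stripped before matching.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        shopt -s extglob
        mem=("${mem[@]#Node +([0-9]) }")
        # Scan: every non-matching field is skipped (the 'continue' runs in
        # the trace); the first match prints its value and returns success.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

On this box both get_meminfo AnonHugePages and get_meminfo HugePages_Surp print 0, which verify_nr_hugepages stores as anon and surp.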
00:03:21.007 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:21.007 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:21.007 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:21.007 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:21.007 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:21.007 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:21.007 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:21.007 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:21.007 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.007 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.007 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.007 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.007 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.007 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:21.007 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:21.008 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175549708 kB' 'MemAvailable: 178424132 kB' 'Buffers: 3896 kB' 'Cached: 10132884 kB' 'SwapCached: 0 kB' 'Active: 7162296 kB' 'Inactive: 3507524 kB' 'Active(anon): 6770288 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536456 kB' 'Mapped: 168928 kB' 'Shmem: 6237248 kB' 'KReclaimable: 238688 kB' 'Slab: 825036 kB' 'SReclaimable: 238688 kB' 'SUnreclaim: 586348 kB' 'KernelStack: 20592 kB' 'PageTables: 9336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8297368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315436 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3017684 kB' 'DirectMap2M: 16584704 kB' 'DirectMap1G: 182452224 kB'
[xtrace condensed: the @31/@32 read loop tests every key above against HugePages_Rsvd; MemTotal through HugePages_Free all continue]
00:03:21.009 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:21.009 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:21.009 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:21.009 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:21.009 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:21.009 nr_hugepages=1024
00:03:21.009 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:21.009 resv_hugepages=0
00:03:21.009 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:21.009 surplus_hugepages=0
00:03:21.009 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:21.009 anon_hugepages=0
00:03:21.009 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:21.009 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
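hugepages.sh@107-@109 are the assertions this no_shrink_alloc test actually cares about: the pool reported by the kernel must equal the requested pages plus surplus and reserved, and the pool must still be the 1024 pages the test configured. A hedged sketch of that bookkeeping, reusing the hypothetical get_meminfo_sketch from above (not the script's exact variable names):

    # Values as reported in this run: surp=0, resv=0, total=1024.
    nr_hugepages=1024
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    total=$(get_meminfo_sketch HugePages_Total)
    # Passes only if nothing was shrunk or over-allocated behind the test's back.
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage pool out of sync' >&2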
00:03:21.270 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:21.270 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:21.270 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:21.270 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:21.270 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:21.270 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.270 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.270 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.270 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.270 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.270 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:21.270 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:21.270 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175550152 kB' 'MemAvailable: 178424576 kB' 'Buffers: 3896 kB' 'Cached: 10132904 kB' 'SwapCached: 0 kB' 'Active: 7162368 kB' 'Inactive: 3507524 kB' 'Active(anon): 6770360 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536480 kB' 'Mapped: 168928 kB' 'Shmem: 6237268 kB' 'KReclaimable: 238688 kB' 'Slab: 825036 kB' 'SReclaimable: 238688 kB' 'SUnreclaim: 586348 kB' 'KernelStack: 20592 kB' 'PageTables: 9320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8297392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315436 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3017684 kB' 'DirectMap2M: 16584704 kB' 'DirectMap1G: 182452224 kB'
[xtrace condensed: the @31/@32 read loop tests every key above against HugePages_Total; MemTotal through Unaccepted all continue]
00:03:21.272 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:21.272 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:21.272 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:21.272 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:21.272 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:21.272 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:21.272 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:21.272 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:21.272 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:21.272 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:21.272 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:21.272 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:21.272 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:21.272 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
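get_nodes enumerates /sys/devices/system/node/node<N> (two nodes on this host) and the query that follows re-reads node0's own meminfo, where every line carries a "Node <N> " prefix that common.sh@29 strips before parsing. A rough standalone sketch of that per-node read (hypothetical variable names; assumes the usual sysfs layout):

    shopt -s extglob   # the traced script relies on the +([0-9]) extended glob
    declare -A node_surp
    for node in /sys/devices/system/node/node+([0-9]); do
        n=${node##*node}
        # Per-node lines read "Node 0 HugePages_Surp: 0"; drop the prefix so
        # the keys line up with the /proc/meminfo names used elsewhere.
        node_surp[$n]=$(sed 's/^Node [0-9]* //' "$node/meminfo" \
            | awk -F': *' '$1 == "HugePages_Surp" {print $2}')
    done
    echo "node0 surplus: ${node_surp[0]:-n/a}"   # 0 on this run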
00:03:21.272 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:21.272 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:21.272 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:21.272 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:21.272 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:21.272 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.272 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:21.272 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:21.272 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.272 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.272 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:21.272 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:21.272 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85671432 kB' 'MemUsed: 11991252 kB' 'SwapCached: 0 kB' 'Active: 5001520 kB' 'Inactive: 3336416 kB' 'Active(anon): 4843980 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3336416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8182956 kB' 'Mapped: 73968 kB' 'AnonPages: 158236 kB' 'Shmem: 4689000 kB' 'KernelStack: 10568 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 129232 kB' 'Slab: 413760 kB' 'SReclaimable: 129232 kB' 'SUnreclaim: 284528 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: node0's keys are tested against HugePages_Surp at setup/common.sh@32; MemTotal through HugePages_Free all continue]
00:03:21.273 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:21.273 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:21.273 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:21.273 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:21.273 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:21.273 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:21.273 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:21.273 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:21.273 node0=1024 expecting 1024
00:03:21.273 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:21.273 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:21.273 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:21.273 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:21.273 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:21.273 20:28:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:23.810 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:23.810 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:23.810 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:23.810 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:23.810 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:23.810 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:23.810 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:23.810 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:23.810 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:23.810 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:23.810 0000:80:04.6 (8086 2021): Already using the
00:03:23.810 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:23.810 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:23.810 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:23.810 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:23.810 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:23.810 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:23.810 INFO: Requested 512 hugepages but 1024 already allocated on node0
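The INFO line above is the whole effect of the NRHUGE=512 / CLEAR_HUGE=no setup call: node0 already holds 1024 hugepages, so the smaller request becomes a no-op and nothing is freed. A minimal sketch of that top-up behaviour, assuming 2048 kB pages and the standard sysfs knobs (illustrative variable names; the real logic lives in spdk/scripts/setup.sh and is more involved):

    #!/usr/bin/env bash
    # Never shrink an existing reservation; only grow it on demand.
    NRHUGE=${NRHUGE:-512}
    sysfs=/sys/devices/system/node/node0/hugepages/hugepages-2048kB
    current=$(< "$sysfs/nr_hugepages")
    if (( current >= NRHUGE )); then
        echo "INFO: Requested $NRHUGE hugepages but $current already allocated on node0"
    else
        echo "$NRHUGE" > "$sysfs/nr_hugepages"   # needs root
    fi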
00:03:23.810 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:23.810 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:23.810 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:23.810 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:23.810 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:23.810 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:23.810 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:23.810 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:23.810 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:23.810 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:23.810 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:23.810 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:23.810 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.810 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.810 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.810 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.810 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.810 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.810 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.810 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.810 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175531240 kB' 'MemAvailable: 178405664 kB' 'Buffers: 3896 kB' 'Cached: 10132988 kB' 'SwapCached: 0 kB' 'Active: 7163116 kB' 'Inactive: 3507524 kB' 'Active(anon): 6771108 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536968 kB' 'Mapped: 169004 kB' 'Shmem: 6237352 kB' 'KReclaimable: 238688 kB' 'Slab: 824412 kB' 'SReclaimable: 238688 kB' 'SUnreclaim: 585724 kB' 'KernelStack: 20848 kB' 'PageTables: 9424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8300764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315676 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3017684 kB' 'DirectMap2M: 16584704 kB' 'DirectMap1G: 182452224 kB'
00:03:23.810 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue  [xtrace condensed: per-key comparison and continue repeat for MemTotal through HardwareCorrupted; none match AnonHugePages]
00:03:23.812 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:23.812 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.812 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:23.812 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
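That is the second time the same scan has run to completion (first HugePages_Surp for the per-node count, now AnonHugePages for verify_nr_hugepages). The loop behind all of these xtrace lines is setup/common.sh's get_meminfo; a self-contained sketch of the pattern, paraphrased from the trace (a sketch, not the verbatim SPDK source: it reads the file directly, whereas the real helper snapshots it with mapfile and strips per-node prefixes first, as shown after the next scan):

    #!/usr/bin/env bash
    # Split each meminfo line on ': ' into key/value, skip every key that
    # does not match (the long runs of "continue" above), and print the
    # value of the first match.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo
        # A node argument switches to that node's own meminfo file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < "$mem_f"
        return 1
    }

    get_meminfo AnonHugePages   # prints 0 on the machine traced here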
00:03:23.812 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:23.812 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:23.812 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:23.812 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:23.812 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.812 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.812 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.812 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.812 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.812 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.812 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.812 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.812 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175533544 kB' 'MemAvailable: 178407968 kB' 'Buffers: 3896 kB' 'Cached: 10132992 kB' 'SwapCached: 0 kB' 'Active: 7162036 kB' 'Inactive: 3507524 kB' 'Active(anon): 6770028 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535912 kB' 'Mapped: 168980 kB' 'Shmem: 6237356 kB' 'KReclaimable: 238688 kB' 'Slab: 824616 kB' 'SReclaimable: 238688 kB' 'SUnreclaim: 585928 kB' 'KernelStack: 20544 kB' 'PageTables: 8816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8298164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315500 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3017684 kB' 'DirectMap2M: 16584704 kB' 'DirectMap1G: 182452224 kB'
00:03:23.812 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue  [xtrace condensed: per-key comparison and continue repeat for MemTotal through HugePages_Rsvd; none match HugePages_Surp]
00:03:23.814 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:23.814 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.814 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:23.814 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
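Every one of these get_meminfo calls runs common.sh@29, mem=("${mem[@]#Node +([0-9]) }"), before parsing: per-node meminfo files prefix each line with "Node N ", and stripping that prefix lets a single key/value parser serve both /proc/meminfo and /sys/devices/system/node/node*/meminfo. A standalone demo of the expansion (made-up input values):

    #!/usr/bin/env bash
    shopt -s extglob   # +([0-9]) is an extended glob pattern
    mem=("Node 0 HugePages_Total: 1024" "Node 0 HugePages_Free: 1024" "MemTotal: 191381152 kB")
    mem=("${mem[@]#Node +([0-9]) }")   # strip a leading "Node <digits> " where present
    printf '%s\n' "${mem[@]}"
    # HugePages_Total: 1024
    # HugePages_Free: 1024
    # MemTotal: 191381152 kB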
00:03:23.814 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:23.814 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:23.814 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:23.814 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:23.814 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.814 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.814 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.814 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.814 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.814 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.814 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.814 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.814 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175534764 kB' 'MemAvailable: 178409188 kB' 'Buffers: 3896 kB' 'Cached: 10133012 kB' 'SwapCached: 0 kB' 'Active: 7162204 kB' 'Inactive: 3507524 kB' 'Active(anon): 6770196 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536148 kB' 'Mapped: 168980 kB' 'Shmem: 6237376 kB' 'KReclaimable: 238688 kB' 'Slab: 824588 kB' 'SReclaimable: 238688 kB' 'SUnreclaim: 585900 kB' 'KernelStack: 20544 kB' 'PageTables: 8816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8298188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315468 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3017684 kB' 'DirectMap2M: 16584704 kB' 'DirectMap1G: 182452224 kB'
00:03:23.814 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue  [xtrace condensed: per-key comparison and continue repeat for MemTotal through PageTables against HugePages_Rsvd; log truncated mid-scan at 00:03:23.815]
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.815 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.815 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.815 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.815 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.815 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.815 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.815 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.815 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.815 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.815 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.815 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.815 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.815 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.815 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.816 20:28:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:23.816 nr_hugepages=1024 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:23.816 resv_hugepages=0 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:23.816 surplus_hugepages=0 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:23.816 anon_hugepages=0 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:23.816 20:28:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175534764 kB' 'MemAvailable: 178409188 kB' 'Buffers: 3896 kB' 'Cached: 10133032 kB' 'SwapCached: 0 kB' 'Active: 7162276 kB' 'Inactive: 3507524 kB' 'Active(anon): 6770268 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536144 kB' 'Mapped: 168980 kB' 'Shmem: 6237396 kB' 'KReclaimable: 238688 kB' 'Slab: 824588 kB' 'SReclaimable: 238688 kB' 'SUnreclaim: 585900 kB' 'KernelStack: 20544 kB' 'PageTables: 8816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8298208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315484 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3017684 kB' 'DirectMap2M: 16584704 kB' 'DirectMap1G: 182452224 kB' 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
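[Editor's note] The entries above are setup/common.sh's get_meminfo walking the meminfo fields one at a time. A minimal self-contained sketch of that scan pattern (simplified: it reads the file directly instead of going through the mapfile/printf indirection the trace shows, and the function name is illustrative, not the SPDK helper):

#!/usr/bin/env bash
# Re-creation of the repeated @31/@32 cycle: split each meminfo line on
# ': ' into field name, value, and unit, then 'continue' past every field
# until the requested one matches; echo its value and return 0 (@33).
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the repeated check/continue traced above
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1   # field absent
}

get_meminfo_sketch HugePages_Total   # on the box above this prints 1024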
[... setup/common.sh@31-32 xtrace elided: the field-by-field scan repeats (Buffers through Unaccepted) against HugePages_Total before the match below ...]
00:03:23.818 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.818 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:23.818 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:23.818 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:23.818 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:23.818 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:23.818 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.818 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:23.818 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.818 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:23.818 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:23.818 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:23.818 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:23.818 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:23.818 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:24.078 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.078 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:24.078 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:24.078 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.078 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.078 20:28:58
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:24.078 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:24.078 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.078 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.078 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.078 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.078 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85666052 kB' 'MemUsed: 11996632 kB' 'SwapCached: 0 kB' 'Active: 5001652 kB' 'Inactive: 3336416 kB' 'Active(anon): 4844112 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3336416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8182976 kB' 'Mapped: 74012 kB' 'AnonPages: 158220 kB' 'Shmem: 4689020 kB' 'KernelStack: 10616 kB' 'PageTables: 4036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 129232 kB' 'Slab: 413172 kB' 'SReclaimable: 129232 kB' 'SUnreclaim: 283940 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:24.078 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.078 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.078 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.078 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.078 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.078 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.078 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.078 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.078 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.078 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.078 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.078 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.078 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.078 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.078 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.078 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.078 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.078 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.078 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
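[Editor's note] Worth noting in the trace just above: when a node argument is given (node=0 here), common.sh@23-24 swaps mem_f from /proc/meminfo to the per-node sysfs file, and common.sh@29 strips the "Node <n> " prefix those files carry so the same field scan applies to both sources. A rough standalone sketch of that selection and prefix strip (variable names follow the trace; the extglob setup is an assumption about the surrounding shell options):

#!/usr/bin/env bash
shopt -s extglob                       # needed for the +([0-9]) pattern below
node=0
mem_f=/proc/meminfo
if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo   # per-node source, as traced at @24
fi
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")       # "Node 0 MemTotal: ..." -> "MemTotal: ..."
printf '%s\n' "${mem[@]:0:3}"          # first few normalized lines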
[... setup/common.sh@31-32 xtrace elided: the same scan runs over the node0 meminfo fields (Inactive through HugePages_Free) against HugePages_Surp before the match below ...]
00:03:24.079 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.079 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.079 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:24.079 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:24.079 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:24.079 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:24.079 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:24.079 20:28:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:24.079 node0=1024 expecting 1024 00:03:24.079 20:28:58 setup.sh.hugepages.no_shrink_alloc --
setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:24.079 00:03:24.079 real 0m5.629s 00:03:24.079 user 0m2.221s 00:03:24.079 sys 0m3.522s 00:03:24.079 20:28:58 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:24.079 20:28:58 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:24.079 ************************************ 00:03:24.079 END TEST no_shrink_alloc 00:03:24.079 ************************************ 00:03:24.079 20:28:58 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:24.079 20:28:58 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:24.079 20:28:58 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:24.079 20:28:58 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:24.079 20:28:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:24.079 20:28:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:24.079 20:28:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:24.079 20:28:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:24.079 20:28:58 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:24.079 20:28:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:24.079 20:28:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:24.079 20:28:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:24.079 20:28:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:24.079 20:28:58 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:24.079 20:28:58 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:24.079 00:03:24.079 real 0m20.631s 00:03:24.079 user 0m7.847s 00:03:24.079 sys 0m12.128s 00:03:24.079 20:28:58 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:24.079 20:28:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:24.079 ************************************ 00:03:24.079 END TEST hugepages 00:03:24.079 ************************************ 00:03:24.079 20:28:58 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:24.079 20:28:58 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:24.079 20:28:58 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:24.079 20:28:58 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:24.079 20:28:58 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:24.079 ************************************ 00:03:24.079 START TEST driver 00:03:24.079 ************************************ 00:03:24.079 20:28:58 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:24.079 * Looking for test storage... 
00:03:24.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:24.079 20:28:58 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:24.079 20:28:58 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:24.079 20:28:58 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:28.263 20:29:02 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:28.263 20:29:02 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:28.263 20:29:02 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:28.263 20:29:02 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:28.263 ************************************ 00:03:28.263 START TEST guess_driver 00:03:28.263 ************************************ 00:03:28.263 20:29:02 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:28.263 20:29:02 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:28.263 20:29:02 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:28.263 20:29:02 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:28.263 20:29:02 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:28.263 20:29:02 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_groups 00:03:28.263 20:29:02 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:28.263 20:29:02 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:28.263 20:29:02 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:28.263 20:29:02 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:28.263 20:29:02 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 174 > 0 )) 00:03:28.263 20:29:02 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:28.263 20:29:02 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:28.263 20:29:02 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:28.263 20:29:02 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:28.263 20:29:02 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:28.263 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:28.263 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:28.263 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:28.263 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:28.263 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:28.263 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:28.263 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:28.263 20:29:02 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:28.263 20:29:02 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:28.263 20:29:02 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:28.263 20:29:02 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:28.263 20:29:02 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:28.263 Looking for driver=vfio-pci 00:03:28.263 20:29:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:28.263 20:29:02 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:28.263 20:29:02 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.263 20:29:02 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:30.829 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:30.829 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:30.829 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:30.830 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:30.830 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:30.830 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:30.830 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:30.830 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:30.830 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:30.830 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:30.830 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:30.830 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:30.830 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:30.830 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:30.830 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:30.830 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:30.830 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:30.830 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:30.830 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:30.830 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:30.830 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:30.830 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:30.830 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:30.830 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:30.830 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:30.830 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:30.830 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:30.830 20:29:05 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:30.830 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:30.830 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:30.830 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:30.830 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:30.830 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:30.830 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:30.830 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:30.830 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:30.830 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:30.830 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:30.830 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.088 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:31.088 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:31.088 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.088 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:31.088 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:31.088 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.088 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:31.088 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:31.088 20:29:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:32.023 20:29:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:32.023 20:29:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:32.023 20:29:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:32.023 20:29:06 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:32.023 20:29:06 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:32.023 20:29:06 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:32.023 20:29:06 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:36.208 00:03:36.208 real 0m7.491s 00:03:36.208 user 0m2.097s 00:03:36.208 sys 0m3.827s 00:03:36.208 20:29:09 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:36.208 20:29:09 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:36.208 ************************************ 00:03:36.208 END TEST guess_driver 00:03:36.208 ************************************ 00:03:36.208 20:29:09 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:36.208 00:03:36.208 real 0m11.565s 00:03:36.208 user 0m3.276s 00:03:36.208 sys 0m5.965s 00:03:36.208 20:29:09 
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:36.208 20:29:09 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:36.208 ************************************ 00:03:36.208 END TEST driver 00:03:36.208 ************************************ 00:03:36.208 20:29:10 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:36.208 20:29:10 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:36.208 20:29:10 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:36.208 20:29:10 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:36.208 20:29:10 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:36.208 ************************************ 00:03:36.208 START TEST devices 00:03:36.208 ************************************ 00:03:36.208 20:29:10 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:36.208 * Looking for test storage... 00:03:36.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:36.208 20:29:10 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:36.208 20:29:10 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:36.208 20:29:10 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:36.208 20:29:10 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:38.739 20:29:12 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:38.739 20:29:12 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:38.739 20:29:12 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:38.739 20:29:12 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:38.739 20:29:12 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:38.739 20:29:12 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:38.739 20:29:12 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:38.739 20:29:12 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:38.739 20:29:12 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:38.739 20:29:12 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:38.739 20:29:12 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:38.739 20:29:12 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:38.739 20:29:12 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:38.739 20:29:12 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:38.739 20:29:12 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:38.739 20:29:12 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:38.739 20:29:12 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:38.739 20:29:12 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:03:38.739 20:29:12 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:38.739 20:29:12 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:38.739 20:29:12 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:38.739 
20:29:12 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:38.739 No valid GPT data, bailing 00:03:38.739 20:29:12 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:38.739 20:29:12 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:38.739 20:29:12 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:38.739 20:29:12 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:38.739 20:29:12 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:38.739 20:29:12 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:38.739 20:29:12 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:38.739 20:29:12 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:38.739 20:29:12 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:38.739 20:29:12 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:03:38.739 20:29:12 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:38.739 20:29:12 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:38.739 20:29:12 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:38.739 20:29:12 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:38.739 20:29:12 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:38.739 20:29:12 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:38.739 ************************************ 00:03:38.739 START TEST nvme_mount 00:03:38.739 ************************************ 00:03:38.739 20:29:12 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:38.739 20:29:12 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:38.739 20:29:12 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:38.739 20:29:12 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:38.739 20:29:12 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:38.739 20:29:12 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:38.739 20:29:12 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:38.739 20:29:12 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:38.739 20:29:12 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:38.739 20:29:12 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:38.739 20:29:12 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:38.739 20:29:12 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:38.739 20:29:12 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:38.739 20:29:12 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:38.739 20:29:12 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:38.739 20:29:12 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:38.739 20:29:12 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:03:38.739 20:29:12 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:38.739 20:29:12 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:38.739 20:29:12 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:39.303 Creating new GPT entries in memory. 00:03:39.303 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:39.303 other utilities. 00:03:39.303 20:29:13 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:39.303 20:29:13 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:39.303 20:29:13 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:39.303 20:29:13 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:39.303 20:29:13 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:40.680 Creating new GPT entries in memory. 00:03:40.680 The operation has completed successfully. 00:03:40.680 20:29:14 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:40.680 20:29:14 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:40.680 20:29:14 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2488991 00:03:40.680 20:29:14 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:40.680 20:29:14 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:40.680 20:29:14 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:40.680 20:29:14 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:40.680 20:29:14 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:40.680 20:29:14 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:40.680 20:29:14 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:40.680 20:29:14 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:40.680 20:29:14 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:40.680 20:29:14 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:40.680 20:29:14 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:40.680 20:29:14 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:40.680 20:29:14 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:40.680 20:29:14 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:40.680 20:29:14 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:40.680 20:29:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.680 20:29:14 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:40.680 20:29:14 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:40.680 20:29:14 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.680 20:29:14 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:43.209 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:43.210 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:43.210 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:43.210 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:43.210 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:43.210 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:43.210 20:29:17 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.468 20:29:17 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:43.468 20:29:17 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:43.468 20:29:17 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.468 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:43.468 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:43.468 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:43.468 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.468 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:43.468 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:43.468 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:43.468 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:43.468 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:43.468 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.468 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:43.468 20:29:17 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:43.468 20:29:17 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.468 20:29:17 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:45.999 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:46.256 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:46.256 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:46.256 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:46.256 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:46.256 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:46.256 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.256 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:46.256 20:29:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:46.257 20:29:20 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.257 20:29:20 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:48.819 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:48.819 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:48.819 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:48.819 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.819 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:48.819 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.819 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:48.819 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.819 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:48.819 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.819 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:48.820 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.820 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:48.820 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.820 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 
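The nvme_mount run above follows a fixed cycle: partition with sgdisk, format with mkfs.ext4 -qF, mount under the test directory, drop a dummy test_nvme file, verify the mount and the file, then umount and wipefs the signatures away. A minimal standalone sketch of that cycle, assuming /dev/nvme0n1p1 and a scratch mount point (both placeholders, not the workspace paths from the log):

    #!/usr/bin/env bash
    set -e
    dev=/dev/nvme0n1p1          # placeholder test partition
    mnt=/tmp/nvme_mount         # placeholder mount point

    mkfs.ext4 -qF "$dev"        # same flags the mkfs helper uses above
    mkdir -p "$mnt"
    mount "$dev" "$mnt"
    touch "$mnt/test_nvme"      # dummy file the verify step checks for

    mountpoint -q "$mnt"        # verify: still mounted ...
    [[ -e $mnt/test_nvme ]]     # ... and the test file is present

    rm "$mnt/test_nvme"         # cleanup mirrors cleanup_nvme above
    umount "$mnt"
    wipefs --all "$dev"         # erase the ext4 signature (53 ef)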
00:03:48.820 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.820 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:48.820 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.820 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:48.820 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.820 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:48.820 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.820 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:48.820 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.820 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:48.820 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.820 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:48.820 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.820 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:48.820 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.820 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:48.820 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.820 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:48.820 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.820 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:48.820 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.820 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:48.820 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:48.820 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:48.820 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:48.820 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:48.820 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:48.820 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:48.820 20:29:23 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:48.820 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:48.820 00:03:48.820 real 0m10.528s 00:03:48.820 user 0m3.068s 00:03:48.820 sys 0m5.235s 00:03:48.820 20:29:23 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.820 20:29:23 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:03:48.820 ************************************ 00:03:48.820 END TEST nvme_mount 00:03:48.820 ************************************ 00:03:48.820 20:29:23 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:48.820 20:29:23 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:48.820 20:29:23 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:48.820 20:29:23 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.820 20:29:23 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:49.078 ************************************ 00:03:49.078 START TEST dm_mount 00:03:49.078 ************************************ 00:03:49.078 20:29:23 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:49.078 20:29:23 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:49.078 20:29:23 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:49.078 20:29:23 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:49.078 20:29:23 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:49.078 20:29:23 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:49.078 20:29:23 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:49.078 20:29:23 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:49.078 20:29:23 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:49.078 20:29:23 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:49.078 20:29:23 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:49.078 20:29:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:49.078 20:29:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:49.078 20:29:23 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:49.078 20:29:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:49.078 20:29:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:49.078 20:29:23 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:49.078 20:29:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:49.078 20:29:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:49.078 20:29:23 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:49.078 20:29:23 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:49.078 20:29:23 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:50.013 Creating new GPT entries in memory. 00:03:50.013 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:50.013 other utilities. 00:03:50.013 20:29:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:50.013 20:29:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:50.013 20:29:24 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:50.013 20:29:24 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:50.013 20:29:24 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:50.947 Creating new GPT entries in memory. 00:03:50.947 The operation has completed successfully. 00:03:50.947 20:29:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:50.947 20:29:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:50.947 20:29:25 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:50.947 20:29:25 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:50.947 20:29:25 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:52.321 The operation has completed successfully. 00:03:52.321 20:29:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:52.321 20:29:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:52.321 20:29:26 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2493168 00:03:52.321 20:29:26 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:52.321 20:29:26 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:52.321 20:29:26 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:52.321 20:29:26 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:52.321 20:29:26 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:52.321 20:29:26 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:52.321 20:29:26 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:52.321 20:29:26 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:52.321 20:29:26 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:52.321 20:29:26 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:03:52.321 20:29:26 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:03:52.321 20:29:26 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:03:52.321 20:29:26 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:03:52.321 20:29:26 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:52.321 20:29:26 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:52.321 20:29:26 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:52.321 20:29:26 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:52.321 20:29:26 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:52.322 20:29:26 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:52.322 20:29:26 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:52.322 20:29:26 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:52.322 20:29:26 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:52.322 20:29:26 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:52.322 20:29:26 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:52.322 20:29:26 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:52.322 20:29:26 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:52.322 20:29:26 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:52.322 20:29:26 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:52.322 20:29:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.322 20:29:26 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:52.322 20:29:26 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:52.322 20:29:26 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.322 20:29:26 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:03:54.843 20:29:29 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.843 20:29:29 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:57.368 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.369 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:03:57.369 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:57.369 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.369 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.369 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.369 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.369 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.369 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.369 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.369 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.369 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.369 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.369 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.369 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.369 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.369 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.369 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.369 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.369 20:29:31 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.369 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.369 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.369 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.369 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.369 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.369 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.369 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.369 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.369 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.369 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.369 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.369 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.369 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.369 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.369 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.369 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.627 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:57.627 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:57.628 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:57.628 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:57.628 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:57.628 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:57.628 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:57.628 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:57.628 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:57.628 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:57.628 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:57.628 20:29:31 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:57.628 00:03:57.628 real 0m8.663s 00:03:57.628 user 0m2.090s 00:03:57.628 sys 0m3.589s 00:03:57.628 20:29:31 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.628 20:29:31 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:57.628 ************************************ 00:03:57.628 END TEST dm_mount 00:03:57.628 ************************************ 00:03:57.628 20:29:32 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0
00:03:57.628 20:29:32 setup.sh.devices -- setup/devices.sh@1 -- # cleanup
00:03:57.628 20:29:32 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme
00:03:57.628 20:29:32 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:57.628 20:29:32 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:57.628 20:29:32 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:03:57.628 20:29:32 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:57.628 20:29:32 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:57.886 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:03:57.886 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:03:57.886 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:03:57.886 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:03:57.886 20:29:32 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm
00:03:57.886 20:29:32 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:57.886 20:29:32 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:03:57.886 20:29:32 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:57.886 20:29:32 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:03:57.886 20:29:32 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:03:57.886 20:29:32 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:03:57.886
00:03:57.886 real 0m22.252s
00:03:57.886 user 0m6.114s
00:03:57.886 sys 0m10.719s
00:03:57.886 20:29:32 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:57.886 20:29:32 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:03:57.886 ************************************
00:03:57.886 END TEST devices
00:03:57.886 ************************************
00:03:57.886 20:29:32 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:03:57.886
00:03:57.886 real 1m14.097s
00:03:57.886 user 0m23.861s
00:03:57.886 sys 0m40.475s
00:03:57.886 20:29:32 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:57.886 20:29:32 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:57.886 ************************************
00:03:57.886 END TEST setup.sh
00:03:57.886 ************************************
00:03:58.145 20:29:32 -- common/autotest_common.sh@1142 -- # return 0
00:03:58.145 20:29:32 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:04:00.674 Hugepages
00:04:00.674 node hugesize free / total
00:04:00.674 node0 1048576kB 0 / 0
00:04:00.674 node0 2048kB 2048 / 2048
00:04:00.674 node1 1048576kB 0 / 0
00:04:00.674 node1 2048kB 0 / 0
00:04:00.674
00:04:00.674 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:00.674 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:04:00.674 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:04:00.674 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:04:00.674 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:04:00.674 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:04:00.674 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:04:00.674 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:04:00.674 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:04:00.674 NVMe
0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:00.674 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:00.674 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:00.675 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:00.675 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:00.675 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:00.675 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:00.675 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:00.675 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:00.675 20:29:34 -- spdk/autotest.sh@130 -- # uname -s 00:04:00.675 20:29:34 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:00.675 20:29:34 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:00.675 20:29:34 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:03.205 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:03.205 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:03.205 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:03.205 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:03.205 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:03.205 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:03.205 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:03.205 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:03.205 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:03.205 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:03.205 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:03.205 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:03.205 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:03.205 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:03.205 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:03.205 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:03.769 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:04.026 20:29:38 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:04.960 20:29:39 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:04.960 20:29:39 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:04.960 20:29:39 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:04.960 20:29:39 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:04.960 20:29:39 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:04.960 20:29:39 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:04.960 20:29:39 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:04.960 20:29:39 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:04.960 20:29:39 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:04.960 20:29:39 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:04.960 20:29:39 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:04:05.217 20:29:39 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:07.740 Waiting for block devices as requested 00:04:07.740 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:07.740 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:07.740 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:07.999 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:07.999 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:07.999 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:07.999 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:08.257 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:08.257 0000:00:04.0 (8086 2021): 
vfio-pci -> ioatdma 00:04:08.257 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:08.515 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:08.515 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:08.515 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:08.515 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:08.773 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:08.773 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:08.773 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:09.030 20:29:43 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:09.030 20:29:43 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:09.030 20:29:43 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:09.030 20:29:43 -- common/autotest_common.sh@1502 -- # grep 0000:5e:00.0/nvme/nvme 00:04:09.030 20:29:43 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:09.030 20:29:43 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:09.030 20:29:43 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:09.030 20:29:43 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:09.030 20:29:43 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:09.030 20:29:43 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:09.030 20:29:43 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:09.030 20:29:43 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:09.030 20:29:43 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:09.030 20:29:43 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:04:09.030 20:29:43 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:09.030 20:29:43 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:09.030 20:29:43 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:09.030 20:29:43 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:09.030 20:29:43 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:09.030 20:29:43 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:09.030 20:29:43 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:09.030 20:29:43 -- common/autotest_common.sh@1557 -- # continue 00:04:09.030 20:29:43 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:09.030 20:29:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:09.030 20:29:43 -- common/autotest_common.sh@10 -- # set +x 00:04:09.030 20:29:43 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:09.030 20:29:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:09.030 20:29:43 -- common/autotest_common.sh@10 -- # set +x 00:04:09.030 20:29:43 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:11.612 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:11.612 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:11.612 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:11.612 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:11.612 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:11.612 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:11.612 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:11.612 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:11.612 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:11.612 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 
00:04:11.612 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:11.612 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:11.612 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:11.612 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:11.869 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:11.869 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:12.802 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:12.802 20:29:47 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:12.802 20:29:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:12.802 20:29:47 -- common/autotest_common.sh@10 -- # set +x 00:04:12.802 20:29:47 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:12.802 20:29:47 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:12.802 20:29:47 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:12.803 20:29:47 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:12.803 20:29:47 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:12.803 20:29:47 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:12.803 20:29:47 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:12.803 20:29:47 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:12.803 20:29:47 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:12.803 20:29:47 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:12.803 20:29:47 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:12.803 20:29:47 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:12.803 20:29:47 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:04:12.803 20:29:47 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:12.803 20:29:47 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:12.803 20:29:47 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:12.803 20:29:47 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:12.803 20:29:47 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:12.803 20:29:47 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:5e:00.0 00:04:12.803 20:29:47 -- common/autotest_common.sh@1592 -- # [[ -z 0000:5e:00.0 ]] 00:04:12.803 20:29:47 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=2501949 00:04:12.803 20:29:47 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:12.803 20:29:47 -- common/autotest_common.sh@1598 -- # waitforlisten 2501949 00:04:12.803 20:29:47 -- common/autotest_common.sh@829 -- # '[' -z 2501949 ']' 00:04:12.803 20:29:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:12.803 20:29:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:12.803 20:29:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:12.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:12.803 20:29:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:12.803 20:29:47 -- common/autotest_common.sh@10 -- # set +x 00:04:12.803 [2024-07-15 20:29:47.211366] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
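For readers following the trace: nvme_namespace_revert and opal_revert_cleanup above share the same BDF plumbing. A rough sketch of it in shell (not a verbatim copy of autotest_common.sh; assumes an SPDK checkout at $rootdir with jq and nvme-cli installed):

    # Enumerate NVMe BDFs the way get_nvme_bdfs does: gen_nvme.sh emits a
    # bdev_nvme_attach_controller config whose traddr fields are PCI addresses.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        # opal_revert_cleanup keeps only controllers whose PCI device id is 0x0a54.
        [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] || continue
        # nvme_namespace_revert resolves the BDF to its controller node and checks
        # OACS bit 3 (0x8, Namespace Management); the trace above shows oacs=0xe.
        ctrlr=/dev/$(basename "$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")")
        oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
        (( oacs & 0x8 )) && echo "$ctrlr: namespace management supported"
    done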
00:04:12.803 [2024-07-15 20:29:47.211415] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2501949 ] 00:04:12.803 EAL: No free 2048 kB hugepages reported on node 1 00:04:12.803 [2024-07-15 20:29:47.266448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.060 [2024-07-15 20:29:47.341458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.624 20:29:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:13.624 20:29:48 -- common/autotest_common.sh@862 -- # return 0 00:04:13.624 20:29:48 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:13.624 20:29:48 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:13.624 20:29:48 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:16.904 nvme0n1 00:04:16.904 20:29:50 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:16.904 [2024-07-15 20:29:51.160213] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:16.904 request: 00:04:16.904 { 00:04:16.904 "nvme_ctrlr_name": "nvme0", 00:04:16.904 "password": "test", 00:04:16.904 "method": "bdev_nvme_opal_revert", 00:04:16.905 "req_id": 1 00:04:16.905 } 00:04:16.905 Got JSON-RPC error response 00:04:16.905 response: 00:04:16.905 { 00:04:16.905 "code": -32602, 00:04:16.905 "message": "Invalid parameters" 00:04:16.905 } 00:04:16.905 20:29:51 -- common/autotest_common.sh@1604 -- # true 00:04:16.905 20:29:51 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:16.905 20:29:51 -- common/autotest_common.sh@1608 -- # killprocess 2501949 00:04:16.905 20:29:51 -- common/autotest_common.sh@948 -- # '[' -z 2501949 ']' 00:04:16.905 20:29:51 -- common/autotest_common.sh@952 -- # kill -0 2501949 00:04:16.905 20:29:51 -- common/autotest_common.sh@953 -- # uname 00:04:16.905 20:29:51 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:16.905 20:29:51 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2501949 00:04:16.905 20:29:51 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:16.905 20:29:51 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:16.905 20:29:51 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2501949' 00:04:16.905 killing process with pid 2501949 00:04:16.905 20:29:51 -- common/autotest_common.sh@967 -- # kill 2501949 00:04:16.905 20:29:51 -- common/autotest_common.sh@972 -- # wait 2501949 00:04:18.806 20:29:52 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:18.807 20:29:52 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:18.807 20:29:52 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:18.807 20:29:52 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:18.807 20:29:52 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:18.807 20:29:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:18.807 20:29:52 -- common/autotest_common.sh@10 -- # set +x 00:04:18.807 20:29:52 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:18.807 20:29:52 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:18.807 20:29:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 
']' 00:04:18.807 20:29:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.807 20:29:52 -- common/autotest_common.sh@10 -- # set +x 00:04:18.807 ************************************ 00:04:18.807 START TEST env 00:04:18.807 ************************************ 00:04:18.807 20:29:52 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:18.807 * Looking for test storage... 00:04:18.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:18.807 20:29:52 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:18.807 20:29:52 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.807 20:29:52 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.807 20:29:52 env -- common/autotest_common.sh@10 -- # set +x 00:04:18.807 ************************************ 00:04:18.807 START TEST env_memory 00:04:18.807 ************************************ 00:04:18.807 20:29:52 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:18.807 00:04:18.807 00:04:18.807 CUnit - A unit testing framework for C - Version 2.1-3 00:04:18.807 http://cunit.sourceforge.net/ 00:04:18.807 00:04:18.807 00:04:18.807 Suite: memory 00:04:18.807 Test: alloc and free memory map ...[2024-07-15 20:29:53.035069] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:18.807 passed 00:04:18.807 Test: mem map translation ...[2024-07-15 20:29:53.053718] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:18.807 [2024-07-15 20:29:53.053735] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:18.807 [2024-07-15 20:29:53.053769] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:18.807 [2024-07-15 20:29:53.053780] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:18.807 passed 00:04:18.807 Test: mem map registration ...[2024-07-15 20:29:53.091584] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:18.807 [2024-07-15 20:29:53.091604] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:18.807 passed 00:04:18.807 Test: mem map adjacent registrations ...passed 00:04:18.807 00:04:18.807 Run Summary: Type Total Ran Passed Failed Inactive 00:04:18.807 suites 1 1 n/a 0 0 00:04:18.807 tests 4 4 4 0 0 00:04:18.807 asserts 152 152 152 0 n/a 00:04:18.807 00:04:18.807 Elapsed time = 0.135 seconds 00:04:18.807 00:04:18.807 real 0m0.147s 00:04:18.807 user 0m0.137s 00:04:18.807 sys 0m0.009s 00:04:18.807 20:29:53 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.807 20:29:53 env.env_memory -- common/autotest_common.sh@10 -- # 
set +x 00:04:18.807 ************************************ 00:04:18.807 END TEST env_memory 00:04:18.807 ************************************ 00:04:18.807 20:29:53 env -- common/autotest_common.sh@1142 -- # return 0 00:04:18.807 20:29:53 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:18.807 20:29:53 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.807 20:29:53 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.807 20:29:53 env -- common/autotest_common.sh@10 -- # set +x 00:04:18.807 ************************************ 00:04:18.807 START TEST env_vtophys 00:04:18.807 ************************************ 00:04:18.807 20:29:53 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:18.807 EAL: lib.eal log level changed from notice to debug 00:04:18.807 EAL: Detected lcore 0 as core 0 on socket 0 00:04:18.807 EAL: Detected lcore 1 as core 1 on socket 0 00:04:18.807 EAL: Detected lcore 2 as core 2 on socket 0 00:04:18.807 EAL: Detected lcore 3 as core 3 on socket 0 00:04:18.807 EAL: Detected lcore 4 as core 4 on socket 0 00:04:18.807 EAL: Detected lcore 5 as core 5 on socket 0 00:04:18.807 EAL: Detected lcore 6 as core 6 on socket 0 00:04:18.807 EAL: Detected lcore 7 as core 8 on socket 0 00:04:18.807 EAL: Detected lcore 8 as core 9 on socket 0 00:04:18.807 EAL: Detected lcore 9 as core 10 on socket 0 00:04:18.807 EAL: Detected lcore 10 as core 11 on socket 0 00:04:18.807 EAL: Detected lcore 11 as core 12 on socket 0 00:04:18.807 EAL: Detected lcore 12 as core 13 on socket 0 00:04:18.807 EAL: Detected lcore 13 as core 16 on socket 0 00:04:18.807 EAL: Detected lcore 14 as core 17 on socket 0 00:04:18.807 EAL: Detected lcore 15 as core 18 on socket 0 00:04:18.807 EAL: Detected lcore 16 as core 19 on socket 0 00:04:18.807 EAL: Detected lcore 17 as core 20 on socket 0 00:04:18.807 EAL: Detected lcore 18 as core 21 on socket 0 00:04:18.807 EAL: Detected lcore 19 as core 25 on socket 0 00:04:18.807 EAL: Detected lcore 20 as core 26 on socket 0 00:04:18.807 EAL: Detected lcore 21 as core 27 on socket 0 00:04:18.807 EAL: Detected lcore 22 as core 28 on socket 0 00:04:18.807 EAL: Detected lcore 23 as core 29 on socket 0 00:04:18.807 EAL: Detected lcore 24 as core 0 on socket 1 00:04:18.807 EAL: Detected lcore 25 as core 1 on socket 1 00:04:18.807 EAL: Detected lcore 26 as core 2 on socket 1 00:04:18.807 EAL: Detected lcore 27 as core 3 on socket 1 00:04:18.807 EAL: Detected lcore 28 as core 4 on socket 1 00:04:18.807 EAL: Detected lcore 29 as core 5 on socket 1 00:04:18.807 EAL: Detected lcore 30 as core 6 on socket 1 00:04:18.807 EAL: Detected lcore 31 as core 9 on socket 1 00:04:18.807 EAL: Detected lcore 32 as core 10 on socket 1 00:04:18.807 EAL: Detected lcore 33 as core 11 on socket 1 00:04:18.807 EAL: Detected lcore 34 as core 12 on socket 1 00:04:18.807 EAL: Detected lcore 35 as core 13 on socket 1 00:04:18.807 EAL: Detected lcore 36 as core 16 on socket 1 00:04:18.807 EAL: Detected lcore 37 as core 17 on socket 1 00:04:18.807 EAL: Detected lcore 38 as core 18 on socket 1 00:04:18.807 EAL: Detected lcore 39 as core 19 on socket 1 00:04:18.807 EAL: Detected lcore 40 as core 20 on socket 1 00:04:18.807 EAL: Detected lcore 41 as core 21 on socket 1 00:04:18.807 EAL: Detected lcore 42 as core 24 on socket 1 00:04:18.807 EAL: Detected lcore 43 as core 25 on socket 1 00:04:18.807 EAL: Detected lcore 44 as core 
26 on socket 1 00:04:18.807 EAL: Detected lcore 45 as core 27 on socket 1 00:04:18.807 EAL: Detected lcore 46 as core 28 on socket 1 00:04:18.807 EAL: Detected lcore 47 as core 29 on socket 1 00:04:18.807 EAL: Detected lcore 48 as core 0 on socket 0 00:04:18.807 EAL: Detected lcore 49 as core 1 on socket 0 00:04:18.807 EAL: Detected lcore 50 as core 2 on socket 0 00:04:18.807 EAL: Detected lcore 51 as core 3 on socket 0 00:04:18.807 EAL: Detected lcore 52 as core 4 on socket 0 00:04:18.807 EAL: Detected lcore 53 as core 5 on socket 0 00:04:18.807 EAL: Detected lcore 54 as core 6 on socket 0 00:04:18.807 EAL: Detected lcore 55 as core 8 on socket 0 00:04:18.807 EAL: Detected lcore 56 as core 9 on socket 0 00:04:18.807 EAL: Detected lcore 57 as core 10 on socket 0 00:04:18.807 EAL: Detected lcore 58 as core 11 on socket 0 00:04:18.807 EAL: Detected lcore 59 as core 12 on socket 0 00:04:18.807 EAL: Detected lcore 60 as core 13 on socket 0 00:04:18.807 EAL: Detected lcore 61 as core 16 on socket 0 00:04:18.807 EAL: Detected lcore 62 as core 17 on socket 0 00:04:18.807 EAL: Detected lcore 63 as core 18 on socket 0 00:04:18.807 EAL: Detected lcore 64 as core 19 on socket 0 00:04:18.807 EAL: Detected lcore 65 as core 20 on socket 0 00:04:18.807 EAL: Detected lcore 66 as core 21 on socket 0 00:04:18.807 EAL: Detected lcore 67 as core 25 on socket 0 00:04:18.807 EAL: Detected lcore 68 as core 26 on socket 0 00:04:18.807 EAL: Detected lcore 69 as core 27 on socket 0 00:04:18.807 EAL: Detected lcore 70 as core 28 on socket 0 00:04:18.807 EAL: Detected lcore 71 as core 29 on socket 0 00:04:18.807 EAL: Detected lcore 72 as core 0 on socket 1 00:04:18.807 EAL: Detected lcore 73 as core 1 on socket 1 00:04:18.807 EAL: Detected lcore 74 as core 2 on socket 1 00:04:18.807 EAL: Detected lcore 75 as core 3 on socket 1 00:04:18.807 EAL: Detected lcore 76 as core 4 on socket 1 00:04:18.807 EAL: Detected lcore 77 as core 5 on socket 1 00:04:18.807 EAL: Detected lcore 78 as core 6 on socket 1 00:04:18.807 EAL: Detected lcore 79 as core 9 on socket 1 00:04:18.807 EAL: Detected lcore 80 as core 10 on socket 1 00:04:18.807 EAL: Detected lcore 81 as core 11 on socket 1 00:04:18.807 EAL: Detected lcore 82 as core 12 on socket 1 00:04:18.807 EAL: Detected lcore 83 as core 13 on socket 1 00:04:18.807 EAL: Detected lcore 84 as core 16 on socket 1 00:04:18.807 EAL: Detected lcore 85 as core 17 on socket 1 00:04:18.807 EAL: Detected lcore 86 as core 18 on socket 1 00:04:18.807 EAL: Detected lcore 87 as core 19 on socket 1 00:04:18.807 EAL: Detected lcore 88 as core 20 on socket 1 00:04:18.807 EAL: Detected lcore 89 as core 21 on socket 1 00:04:18.807 EAL: Detected lcore 90 as core 24 on socket 1 00:04:18.807 EAL: Detected lcore 91 as core 25 on socket 1 00:04:18.807 EAL: Detected lcore 92 as core 26 on socket 1 00:04:18.807 EAL: Detected lcore 93 as core 27 on socket 1 00:04:18.807 EAL: Detected lcore 94 as core 28 on socket 1 00:04:18.807 EAL: Detected lcore 95 as core 29 on socket 1 00:04:18.807 EAL: Maximum logical cores by configuration: 128 00:04:18.807 EAL: Detected CPU lcores: 96 00:04:18.807 EAL: Detected NUMA nodes: 2 00:04:18.807 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:18.807 EAL: Detected shared linkage of DPDK 00:04:18.807 EAL: No shared files mode enabled, IPC will be disabled 00:04:18.807 EAL: Bus pci wants IOVA as 'DC' 00:04:18.807 EAL: Buses did not request a specific IOVA mode. 00:04:18.807 EAL: IOMMU is available, selecting IOVA as VA mode. 
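The lcore map EAL prints above is derived, in effect, from sysfs CPU topology; the same 96-entry mapping can be reproduced from a shell as a rough sketch:

    # Reprint "Detected lcore N as core M on socket S" from sysfs topology.
    for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
        lcore=${cpu##*cpu}
        core=$(cat "$cpu/topology/core_id")
        socket=$(cat "$cpu/topology/physical_package_id")
        echo "Detected lcore $lcore as core $core on socket $socket"
    done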
00:04:18.807 EAL: Selected IOVA mode 'VA' 00:04:18.807 EAL: No free 2048 kB hugepages reported on node 1 00:04:18.807 EAL: Probing VFIO support... 00:04:18.807 EAL: IOMMU type 1 (Type 1) is supported 00:04:18.807 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:18.807 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:18.807 EAL: VFIO support initialized 00:04:18.807 EAL: Ask a virtual area of 0x2e000 bytes 00:04:18.807 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:18.807 EAL: Setting up physically contiguous memory... 00:04:18.807 EAL: Setting maximum number of open files to 524288 00:04:18.807 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:18.808 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:18.808 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:18.808 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.808 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:18.808 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:18.808 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.808 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:18.808 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:18.808 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.808 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:18.808 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:18.808 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.808 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:18.808 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:18.808 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.808 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:18.808 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:18.808 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.808 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:18.808 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:18.808 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.808 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:18.808 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:18.808 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.808 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:18.808 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:18.808 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:18.808 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.808 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:18.808 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:18.808 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.808 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:18.808 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:18.808 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.808 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:18.808 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:18.808 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.808 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:18.808 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:18.808 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.808 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:18.808 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:04:18.808 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.808 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:18.808 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:18.808 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.808 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:18.808 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:18.808 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.808 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:18.808 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:18.808 EAL: Hugepages will be freed exactly as allocated. 00:04:18.808 EAL: No shared files mode enabled, IPC is disabled 00:04:18.808 EAL: No shared files mode enabled, IPC is disabled 00:04:18.808 EAL: TSC frequency is ~2300000 KHz 00:04:18.808 EAL: Main lcore 0 is ready (tid=7f7906ae2a00;cpuset=[0]) 00:04:18.808 EAL: Trying to obtain current memory policy. 00:04:18.808 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:18.808 EAL: Restoring previous memory policy: 0 00:04:18.808 EAL: request: mp_malloc_sync 00:04:18.808 EAL: No shared files mode enabled, IPC is disabled 00:04:18.808 EAL: Heap on socket 0 was expanded by 2MB 00:04:18.808 EAL: No shared files mode enabled, IPC is disabled 00:04:18.808 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:18.808 EAL: Mem event callback 'spdk:(nil)' registered 00:04:18.808 00:04:18.808 00:04:18.808 CUnit - A unit testing framework for C - Version 2.1-3 00:04:18.808 http://cunit.sourceforge.net/ 00:04:18.808 00:04:18.808 00:04:18.808 Suite: components_suite 00:04:18.808 Test: vtophys_malloc_test ...passed 00:04:18.808 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:18.808 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:18.808 EAL: Restoring previous memory policy: 4 00:04:18.808 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.808 EAL: request: mp_malloc_sync 00:04:18.808 EAL: No shared files mode enabled, IPC is disabled 00:04:18.808 EAL: Heap on socket 0 was expanded by 4MB 00:04:18.808 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.808 EAL: request: mp_malloc_sync 00:04:18.808 EAL: No shared files mode enabled, IPC is disabled 00:04:18.808 EAL: Heap on socket 0 was shrunk by 4MB 00:04:18.808 EAL: Trying to obtain current memory policy. 00:04:18.808 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:18.808 EAL: Restoring previous memory policy: 4 00:04:18.808 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.808 EAL: request: mp_malloc_sync 00:04:18.808 EAL: No shared files mode enabled, IPC is disabled 00:04:18.808 EAL: Heap on socket 0 was expanded by 6MB 00:04:18.808 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.808 EAL: request: mp_malloc_sync 00:04:18.808 EAL: No shared files mode enabled, IPC is disabled 00:04:18.808 EAL: Heap on socket 0 was shrunk by 6MB 00:04:18.808 EAL: Trying to obtain current memory policy. 
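The memseg lists above are backed by the 2048 kB hugepages from the earlier setup.sh status table; the per-node counters behind that table are plain sysfs files, roughly:

    # Print "nodeN <size> <free> / <total>" like the Hugepages table above.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            sz=${hp##*hugepages-}
            echo "${node##*/} $sz $(cat "$hp/free_hugepages") / $(cat "$hp/nr_hugepages")"
        done
    done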
00:04:18.808 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:18.808 EAL: Restoring previous memory policy: 4 00:04:18.808 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.808 EAL: request: mp_malloc_sync 00:04:18.808 EAL: No shared files mode enabled, IPC is disabled 00:04:18.808 EAL: Heap on socket 0 was expanded by 10MB 00:04:18.808 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.808 EAL: request: mp_malloc_sync 00:04:18.808 EAL: No shared files mode enabled, IPC is disabled 00:04:18.808 EAL: Heap on socket 0 was shrunk by 10MB 00:04:18.808 EAL: Trying to obtain current memory policy. 00:04:18.808 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:18.808 EAL: Restoring previous memory policy: 4 00:04:18.808 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.808 EAL: request: mp_malloc_sync 00:04:18.808 EAL: No shared files mode enabled, IPC is disabled 00:04:18.808 EAL: Heap on socket 0 was expanded by 18MB 00:04:18.808 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.808 EAL: request: mp_malloc_sync 00:04:18.808 EAL: No shared files mode enabled, IPC is disabled 00:04:18.808 EAL: Heap on socket 0 was shrunk by 18MB 00:04:18.808 EAL: Trying to obtain current memory policy. 00:04:18.808 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:18.808 EAL: Restoring previous memory policy: 4 00:04:18.808 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.808 EAL: request: mp_malloc_sync 00:04:18.808 EAL: No shared files mode enabled, IPC is disabled 00:04:18.808 EAL: Heap on socket 0 was expanded by 34MB 00:04:18.808 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.067 EAL: request: mp_malloc_sync 00:04:19.067 EAL: No shared files mode enabled, IPC is disabled 00:04:19.067 EAL: Heap on socket 0 was shrunk by 34MB 00:04:19.067 EAL: Trying to obtain current memory policy. 00:04:19.067 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.067 EAL: Restoring previous memory policy: 4 00:04:19.067 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.067 EAL: request: mp_malloc_sync 00:04:19.067 EAL: No shared files mode enabled, IPC is disabled 00:04:19.067 EAL: Heap on socket 0 was expanded by 66MB 00:04:19.067 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.067 EAL: request: mp_malloc_sync 00:04:19.067 EAL: No shared files mode enabled, IPC is disabled 00:04:19.067 EAL: Heap on socket 0 was shrunk by 66MB 00:04:19.067 EAL: Trying to obtain current memory policy. 00:04:19.067 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.067 EAL: Restoring previous memory policy: 4 00:04:19.067 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.067 EAL: request: mp_malloc_sync 00:04:19.067 EAL: No shared files mode enabled, IPC is disabled 00:04:19.067 EAL: Heap on socket 0 was expanded by 130MB 00:04:19.067 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.067 EAL: request: mp_malloc_sync 00:04:19.067 EAL: No shared files mode enabled, IPC is disabled 00:04:19.067 EAL: Heap on socket 0 was shrunk by 130MB 00:04:19.067 EAL: Trying to obtain current memory policy. 
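Each "Setting policy MPOL_PREFERRED / Restoring previous memory policy" pair above is a set_mempolicy(2) call wrapped around one allocation round of the malloc test. From a shell, the closest analogue is preferring a NUMA node for a whole process with numactl; an illustrative invocation, assuming numactl is installed and the path is relative to the SPDK checkout:

    # Prefer NUMA node 0 for the test's allocations, as MPOL_PREFERRED does.
    numactl --preferred=0 test/env/vtophys/vtophys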
00:04:19.067 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.067 EAL: Restoring previous memory policy: 4 00:04:19.067 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.067 EAL: request: mp_malloc_sync 00:04:19.067 EAL: No shared files mode enabled, IPC is disabled 00:04:19.067 EAL: Heap on socket 0 was expanded by 258MB 00:04:19.067 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.067 EAL: request: mp_malloc_sync 00:04:19.067 EAL: No shared files mode enabled, IPC is disabled 00:04:19.067 EAL: Heap on socket 0 was shrunk by 258MB 00:04:19.067 EAL: Trying to obtain current memory policy. 00:04:19.067 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.325 EAL: Restoring previous memory policy: 4 00:04:19.325 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.325 EAL: request: mp_malloc_sync 00:04:19.325 EAL: No shared files mode enabled, IPC is disabled 00:04:19.325 EAL: Heap on socket 0 was expanded by 514MB 00:04:19.325 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.325 EAL: request: mp_malloc_sync 00:04:19.325 EAL: No shared files mode enabled, IPC is disabled 00:04:19.325 EAL: Heap on socket 0 was shrunk by 514MB 00:04:19.325 EAL: Trying to obtain current memory policy. 00:04:19.325 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.584 EAL: Restoring previous memory policy: 4 00:04:19.584 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.584 EAL: request: mp_malloc_sync 00:04:19.584 EAL: No shared files mode enabled, IPC is disabled 00:04:19.584 EAL: Heap on socket 0 was expanded by 1026MB 00:04:19.843 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.843 EAL: request: mp_malloc_sync 00:04:19.843 EAL: No shared files mode enabled, IPC is disabled 00:04:19.843 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:19.843 passed 00:04:19.843 00:04:19.843 Run Summary: Type Total Ran Passed Failed Inactive 00:04:19.843 suites 1 1 n/a 0 0 00:04:19.843 tests 2 2 2 0 0 00:04:19.843 asserts 497 497 497 0 n/a 00:04:19.843 00:04:19.843 Elapsed time = 0.963 seconds 00:04:19.843 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.843 EAL: request: mp_malloc_sync 00:04:19.843 EAL: No shared files mode enabled, IPC is disabled 00:04:19.843 EAL: Heap on socket 0 was shrunk by 2MB 00:04:19.843 EAL: No shared files mode enabled, IPC is disabled 00:04:19.843 EAL: No shared files mode enabled, IPC is disabled 00:04:19.843 EAL: No shared files mode enabled, IPC is disabled 00:04:19.843 00:04:19.843 real 0m1.072s 00:04:19.843 user 0m0.637s 00:04:19.843 sys 0m0.406s 00:04:19.843 20:29:54 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:19.843 20:29:54 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:19.843 ************************************ 00:04:19.843 END TEST env_vtophys 00:04:19.843 ************************************ 00:04:19.843 20:29:54 env -- common/autotest_common.sh@1142 -- # return 0 00:04:19.843 20:29:54 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:19.843 20:29:54 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:19.843 20:29:54 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.843 20:29:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.102 ************************************ 00:04:20.102 START TEST env_pci 00:04:20.102 ************************************ 00:04:20.102 20:29:54 env.env_pci -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:20.102 00:04:20.102 00:04:20.102 CUnit - A unit testing framework for C - Version 2.1-3 00:04:20.102 http://cunit.sourceforge.net/ 00:04:20.102 00:04:20.102 00:04:20.102 Suite: pci 00:04:20.102 Test: pci_hook ...[2024-07-15 20:29:54.338476] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2503265 has claimed it 00:04:20.102 EAL: Cannot find device (10000:00:01.0) 00:04:20.102 EAL: Failed to attach device on primary process 00:04:20.102 passed 00:04:20.102 00:04:20.102 Run Summary: Type Total Ran Passed Failed Inactive 00:04:20.102 suites 1 1 n/a 0 0 00:04:20.102 tests 1 1 1 0 0 00:04:20.102 asserts 25 25 25 0 n/a 00:04:20.102 00:04:20.102 Elapsed time = 0.017 seconds 00:04:20.102 00:04:20.102 real 0m0.027s 00:04:20.102 user 0m0.010s 00:04:20.102 sys 0m0.016s 00:04:20.102 20:29:54 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:20.102 20:29:54 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:20.102 ************************************ 00:04:20.102 END TEST env_pci 00:04:20.102 ************************************ 00:04:20.102 20:29:54 env -- common/autotest_common.sh@1142 -- # return 0 00:04:20.102 20:29:54 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:20.102 20:29:54 env -- env/env.sh@15 -- # uname 00:04:20.102 20:29:54 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:20.102 20:29:54 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:20.102 20:29:54 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:20.102 20:29:54 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:20.102 20:29:54 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.102 20:29:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.102 ************************************ 00:04:20.102 START TEST env_dpdk_post_init 00:04:20.102 ************************************ 00:04:20.102 20:29:54 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:20.102 EAL: Detected CPU lcores: 96 00:04:20.102 EAL: Detected NUMA nodes: 2 00:04:20.102 EAL: Detected shared linkage of DPDK 00:04:20.102 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:20.102 EAL: Selected IOVA mode 'VA' 00:04:20.102 EAL: No free 2048 kB hugepages reported on node 1 00:04:20.102 EAL: VFIO support initialized 00:04:20.102 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:20.102 EAL: Using IOMMU type 1 (Type 1) 00:04:20.102 EAL: Ignore mapping IO port bar(1) 00:04:20.102 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:20.102 EAL: Ignore mapping IO port bar(1) 00:04:20.102 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:20.102 EAL: Ignore mapping IO port bar(1) 00:04:20.102 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:20.102 EAL: Ignore mapping IO port bar(1) 00:04:20.102 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:20.361 EAL: Ignore mapping IO port bar(1) 00:04:20.361 EAL: Probe PCI driver: spdk_ioat (8086:2021) 
device: 0000:00:04.4 (socket 0) 00:04:20.361 EAL: Ignore mapping IO port bar(1) 00:04:20.361 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:20.361 EAL: Ignore mapping IO port bar(1) 00:04:20.361 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:20.361 EAL: Ignore mapping IO port bar(1) 00:04:20.361 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:20.929 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:20.929 EAL: Ignore mapping IO port bar(1) 00:04:20.929 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:20.929 EAL: Ignore mapping IO port bar(1) 00:04:20.929 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:20.929 EAL: Ignore mapping IO port bar(1) 00:04:20.929 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:20.929 EAL: Ignore mapping IO port bar(1) 00:04:20.929 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:21.187 EAL: Ignore mapping IO port bar(1) 00:04:21.187 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:21.187 EAL: Ignore mapping IO port bar(1) 00:04:21.187 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:21.187 EAL: Ignore mapping IO port bar(1) 00:04:21.187 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:21.187 EAL: Ignore mapping IO port bar(1) 00:04:21.187 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:24.466 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:24.466 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:24.466 Starting DPDK initialization... 00:04:24.466 Starting SPDK post initialization... 00:04:24.466 SPDK NVMe probe 00:04:24.466 Attaching to 0000:5e:00.0 00:04:24.466 Attached to 0000:5e:00.0 00:04:24.466 Cleaning up... 
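After a probe pass like the one above, the driver each device ended up bound to is visible as a sysfs symlink; a quick spot-check of a few BDFs from this log:

    # Show the bound kernel driver for some of the devices probed above.
    for bdf in 0000:5e:00.0 0000:00:04.0 0000:80:04.0; do
        echo "$bdf -> $(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")"
    done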
00:04:24.466 00:04:24.466 real 0m4.354s 00:04:24.466 user 0m3.303s 00:04:24.466 sys 0m0.127s 00:04:24.466 20:29:58 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.466 20:29:58 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:24.466 ************************************ 00:04:24.466 END TEST env_dpdk_post_init 00:04:24.466 ************************************ 00:04:24.466 20:29:58 env -- common/autotest_common.sh@1142 -- # return 0 00:04:24.466 20:29:58 env -- env/env.sh@26 -- # uname 00:04:24.466 20:29:58 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:24.466 20:29:58 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:24.466 20:29:58 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.466 20:29:58 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.466 20:29:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:24.466 ************************************ 00:04:24.466 START TEST env_mem_callbacks 00:04:24.466 ************************************ 00:04:24.466 20:29:58 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:24.466 EAL: Detected CPU lcores: 96 00:04:24.466 EAL: Detected NUMA nodes: 2 00:04:24.466 EAL: Detected shared linkage of DPDK 00:04:24.466 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:24.466 EAL: Selected IOVA mode 'VA' 00:04:24.466 EAL: No free 2048 kB hugepages reported on node 1 00:04:24.466 EAL: VFIO support initialized 00:04:24.466 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:24.466 00:04:24.466 00:04:24.466 CUnit - A unit testing framework for C - Version 2.1-3 00:04:24.466 http://cunit.sourceforge.net/ 00:04:24.466 00:04:24.466 00:04:24.466 Suite: memory 00:04:24.466 Test: test ... 
00:04:24.466 register 0x200000200000 2097152 00:04:24.466 malloc 3145728 00:04:24.466 register 0x200000400000 4194304 00:04:24.466 buf 0x200000500000 len 3145728 PASSED 00:04:24.466 malloc 64 00:04:24.466 buf 0x2000004fff40 len 64 PASSED 00:04:24.466 malloc 4194304 00:04:24.466 register 0x200000800000 6291456 00:04:24.466 buf 0x200000a00000 len 4194304 PASSED 00:04:24.466 free 0x200000500000 3145728 00:04:24.466 free 0x2000004fff40 64 00:04:24.466 unregister 0x200000400000 4194304 PASSED 00:04:24.466 free 0x200000a00000 4194304 00:04:24.466 unregister 0x200000800000 6291456 PASSED 00:04:24.466 malloc 8388608 00:04:24.466 register 0x200000400000 10485760 00:04:24.466 buf 0x200000600000 len 8388608 PASSED 00:04:24.466 free 0x200000600000 8388608 00:04:24.466 unregister 0x200000400000 10485760 PASSED 00:04:24.466 passed 00:04:24.466 00:04:24.466 Run Summary: Type Total Ran Passed Failed Inactive 00:04:24.466 suites 1 1 n/a 0 0 00:04:24.466 tests 1 1 1 0 0 00:04:24.466 asserts 15 15 15 0 n/a 00:04:24.466 00:04:24.466 Elapsed time = 0.005 seconds 00:04:24.466 00:04:24.466 real 0m0.051s 00:04:24.466 user 0m0.021s 00:04:24.466 sys 0m0.030s 00:04:24.467 20:29:58 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.467 20:29:58 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:24.467 ************************************ 00:04:24.467 END TEST env_mem_callbacks 00:04:24.467 ************************************ 00:04:24.467 20:29:58 env -- common/autotest_common.sh@1142 -- # return 0 00:04:24.467 00:04:24.467 real 0m6.045s 00:04:24.467 user 0m4.254s 00:04:24.467 sys 0m0.866s 00:04:24.467 20:29:58 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.467 20:29:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:24.467 ************************************ 00:04:24.467 END TEST env 00:04:24.467 ************************************ 00:04:24.725 20:29:58 -- common/autotest_common.sh@1142 -- # return 0 00:04:24.725 20:29:58 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:24.725 20:29:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.725 20:29:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.725 20:29:58 -- common/autotest_common.sh@10 -- # set +x 00:04:24.725 ************************************ 00:04:24.725 START TEST rpc 00:04:24.725 ************************************ 00:04:24.725 20:29:58 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:24.725 * Looking for test storage... 00:04:24.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:24.725 20:29:59 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2504086 00:04:24.725 20:29:59 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:24.725 20:29:59 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2504086 00:04:24.725 20:29:59 rpc -- common/autotest_common.sh@829 -- # '[' -z 2504086 ']' 00:04:24.725 20:29:59 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.725 20:29:59 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:24.725 20:29:59 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:24.725 20:29:59 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:24.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.725 20:29:59 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:24.725 20:29:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.725 [2024-07-15 20:29:59.112483] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:04:24.725 [2024-07-15 20:29:59.112542] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2504086 ] 00:04:24.725 EAL: No free 2048 kB hugepages reported on node 1 00:04:24.725 [2024-07-15 20:29:59.165113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.984 [2024-07-15 20:29:59.246432] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:24.984 [2024-07-15 20:29:59.246466] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2504086' to capture a snapshot of events at runtime. 00:04:24.984 [2024-07-15 20:29:59.246473] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:24.984 [2024-07-15 20:29:59.246479] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:24.984 [2024-07-15 20:29:59.246484] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2504086 for offline analysis/debug. 00:04:24.984 [2024-07-15 20:29:59.246503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.550 20:29:59 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:25.550 20:29:59 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:25.550 20:29:59 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:25.550 20:29:59 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:25.550 20:29:59 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:25.550 20:29:59 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:25.550 20:29:59 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.550 20:29:59 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.550 20:29:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.550 ************************************ 00:04:25.550 START TEST rpc_integrity 00:04:25.550 ************************************ 00:04:25.550 20:29:59 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:25.550 20:29:59 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:25.550 20:29:59 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.550 20:29:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.550 20:29:59 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.550 20:29:59 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:04:25.550 20:29:59 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:25.550 20:29:59 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:25.550 20:29:59 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:25.550 20:29:59 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.550 20:29:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.550 20:29:59 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.550 20:29:59 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:25.550 20:29:59 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:25.550 20:29:59 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.550 20:29:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.550 20:29:59 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.550 20:29:59 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:25.550 { 00:04:25.550 "name": "Malloc0", 00:04:25.550 "aliases": [ 00:04:25.550 "395f4680-0a68-4017-a7ce-62a4deabd1ab" 00:04:25.550 ], 00:04:25.550 "product_name": "Malloc disk", 00:04:25.550 "block_size": 512, 00:04:25.550 "num_blocks": 16384, 00:04:25.550 "uuid": "395f4680-0a68-4017-a7ce-62a4deabd1ab", 00:04:25.550 "assigned_rate_limits": { 00:04:25.550 "rw_ios_per_sec": 0, 00:04:25.550 "rw_mbytes_per_sec": 0, 00:04:25.550 "r_mbytes_per_sec": 0, 00:04:25.550 "w_mbytes_per_sec": 0 00:04:25.550 }, 00:04:25.550 "claimed": false, 00:04:25.550 "zoned": false, 00:04:25.550 "supported_io_types": { 00:04:25.550 "read": true, 00:04:25.550 "write": true, 00:04:25.550 "unmap": true, 00:04:25.550 "flush": true, 00:04:25.550 "reset": true, 00:04:25.550 "nvme_admin": false, 00:04:25.550 "nvme_io": false, 00:04:25.550 "nvme_io_md": false, 00:04:25.550 "write_zeroes": true, 00:04:25.550 "zcopy": true, 00:04:25.550 "get_zone_info": false, 00:04:25.550 "zone_management": false, 00:04:25.550 "zone_append": false, 00:04:25.550 "compare": false, 00:04:25.550 "compare_and_write": false, 00:04:25.550 "abort": true, 00:04:25.550 "seek_hole": false, 00:04:25.550 "seek_data": false, 00:04:25.550 "copy": true, 00:04:25.550 "nvme_iov_md": false 00:04:25.550 }, 00:04:25.550 "memory_domains": [ 00:04:25.550 { 00:04:25.550 "dma_device_id": "system", 00:04:25.550 "dma_device_type": 1 00:04:25.550 }, 00:04:25.550 { 00:04:25.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.550 "dma_device_type": 2 00:04:25.550 } 00:04:25.550 ], 00:04:25.550 "driver_specific": {} 00:04:25.550 } 00:04:25.550 ]' 00:04:25.550 20:29:59 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:25.809 20:30:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:25.809 20:30:00 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:25.809 20:30:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.809 20:30:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.809 [2024-07-15 20:30:00.040445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:25.809 [2024-07-15 20:30:00.040477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:25.809 [2024-07-15 20:30:00.040488] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb442d0 00:04:25.809 [2024-07-15 20:30:00.040495] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:25.809 
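At this point in rpc_integrity the target holds a fresh 8 MiB malloc bdev (bdev_malloc_create 8 512, hence the 16384 blocks of 512 B in the JSON above) and has just claimed it under a passthru bdev, so the next bdev_get_bdevs should report length 2. A hedged sketch of the same flow driven with SPDK's stock scripts/rpc.py client, assuming the spdk_tgt launched above is still listening on the default /var/tmp/spdk.sock:

malloc=$(scripts/rpc.py bdev_malloc_create 8 512)   # returns the bdev name, e.g. Malloc0
scripts/rpc.py bdev_passthru_create -b "$malloc" -p Passthru0
scripts/rpc.py bdev_get_bdevs | jq length           # expect 2
scripts/rpc.py bdev_passthru_delete Passthru0
scripts/rpc.py bdev_malloc_delete "$malloc"
scripts/rpc.py bdev_get_bdevs | jq length           # expect 0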
[2024-07-15 20:30:00.041658] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:25.809 [2024-07-15 20:30:00.041680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:25.809 Passthru0 00:04:25.809 20:30:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.809 20:30:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:25.809 20:30:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.809 20:30:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.809 20:30:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.809 20:30:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:25.809 { 00:04:25.809 "name": "Malloc0", 00:04:25.809 "aliases": [ 00:04:25.809 "395f4680-0a68-4017-a7ce-62a4deabd1ab" 00:04:25.809 ], 00:04:25.809 "product_name": "Malloc disk", 00:04:25.809 "block_size": 512, 00:04:25.809 "num_blocks": 16384, 00:04:25.809 "uuid": "395f4680-0a68-4017-a7ce-62a4deabd1ab", 00:04:25.809 "assigned_rate_limits": { 00:04:25.809 "rw_ios_per_sec": 0, 00:04:25.809 "rw_mbytes_per_sec": 0, 00:04:25.809 "r_mbytes_per_sec": 0, 00:04:25.809 "w_mbytes_per_sec": 0 00:04:25.809 }, 00:04:25.809 "claimed": true, 00:04:25.809 "claim_type": "exclusive_write", 00:04:25.809 "zoned": false, 00:04:25.809 "supported_io_types": { 00:04:25.809 "read": true, 00:04:25.809 "write": true, 00:04:25.809 "unmap": true, 00:04:25.809 "flush": true, 00:04:25.809 "reset": true, 00:04:25.809 "nvme_admin": false, 00:04:25.809 "nvme_io": false, 00:04:25.809 "nvme_io_md": false, 00:04:25.809 "write_zeroes": true, 00:04:25.809 "zcopy": true, 00:04:25.809 "get_zone_info": false, 00:04:25.809 "zone_management": false, 00:04:25.809 "zone_append": false, 00:04:25.809 "compare": false, 00:04:25.809 "compare_and_write": false, 00:04:25.809 "abort": true, 00:04:25.809 "seek_hole": false, 00:04:25.809 "seek_data": false, 00:04:25.809 "copy": true, 00:04:25.809 "nvme_iov_md": false 00:04:25.809 }, 00:04:25.809 "memory_domains": [ 00:04:25.809 { 00:04:25.809 "dma_device_id": "system", 00:04:25.809 "dma_device_type": 1 00:04:25.809 }, 00:04:25.809 { 00:04:25.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.809 "dma_device_type": 2 00:04:25.809 } 00:04:25.809 ], 00:04:25.809 "driver_specific": {} 00:04:25.809 }, 00:04:25.809 { 00:04:25.809 "name": "Passthru0", 00:04:25.809 "aliases": [ 00:04:25.809 "6965ddb8-01fa-52f7-a754-f52deb22d01c" 00:04:25.809 ], 00:04:25.809 "product_name": "passthru", 00:04:25.809 "block_size": 512, 00:04:25.809 "num_blocks": 16384, 00:04:25.809 "uuid": "6965ddb8-01fa-52f7-a754-f52deb22d01c", 00:04:25.809 "assigned_rate_limits": { 00:04:25.809 "rw_ios_per_sec": 0, 00:04:25.809 "rw_mbytes_per_sec": 0, 00:04:25.809 "r_mbytes_per_sec": 0, 00:04:25.809 "w_mbytes_per_sec": 0 00:04:25.809 }, 00:04:25.809 "claimed": false, 00:04:25.809 "zoned": false, 00:04:25.809 "supported_io_types": { 00:04:25.809 "read": true, 00:04:25.809 "write": true, 00:04:25.809 "unmap": true, 00:04:25.809 "flush": true, 00:04:25.809 "reset": true, 00:04:25.809 "nvme_admin": false, 00:04:25.809 "nvme_io": false, 00:04:25.809 "nvme_io_md": false, 00:04:25.809 "write_zeroes": true, 00:04:25.809 "zcopy": true, 00:04:25.809 "get_zone_info": false, 00:04:25.809 "zone_management": false, 00:04:25.809 "zone_append": false, 00:04:25.809 "compare": false, 00:04:25.809 "compare_and_write": false, 00:04:25.809 "abort": true, 00:04:25.809 "seek_hole": false, 
00:04:25.809 "seek_data": false, 00:04:25.809 "copy": true, 00:04:25.809 "nvme_iov_md": false 00:04:25.809 }, 00:04:25.809 "memory_domains": [ 00:04:25.809 { 00:04:25.809 "dma_device_id": "system", 00:04:25.809 "dma_device_type": 1 00:04:25.809 }, 00:04:25.809 { 00:04:25.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.809 "dma_device_type": 2 00:04:25.809 } 00:04:25.809 ], 00:04:25.809 "driver_specific": { 00:04:25.809 "passthru": { 00:04:25.809 "name": "Passthru0", 00:04:25.809 "base_bdev_name": "Malloc0" 00:04:25.809 } 00:04:25.809 } 00:04:25.809 } 00:04:25.809 ]' 00:04:25.809 20:30:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:25.809 20:30:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:25.809 20:30:00 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:25.809 20:30:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.809 20:30:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.809 20:30:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.809 20:30:00 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:25.809 20:30:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.809 20:30:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.809 20:30:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.809 20:30:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:25.809 20:30:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.809 20:30:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.809 20:30:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.809 20:30:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:25.809 20:30:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:25.809 20:30:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:25.809 00:04:25.809 real 0m0.237s 00:04:25.809 user 0m0.138s 00:04:25.809 sys 0m0.032s 00:04:25.809 20:30:00 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.809 20:30:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.809 ************************************ 00:04:25.809 END TEST rpc_integrity 00:04:25.809 ************************************ 00:04:25.809 20:30:00 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:25.809 20:30:00 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:25.809 20:30:00 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.809 20:30:00 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.809 20:30:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.809 ************************************ 00:04:25.809 START TEST rpc_plugins 00:04:25.809 ************************************ 00:04:25.809 20:30:00 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:25.809 20:30:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:25.809 20:30:00 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.809 20:30:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:25.809 20:30:00 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.809 20:30:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:25.809 20:30:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:04:25.809 20:30:00 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.809 20:30:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:25.809 20:30:00 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.809 20:30:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:25.809 { 00:04:25.809 "name": "Malloc1", 00:04:25.809 "aliases": [ 00:04:25.810 "443c48fe-2422-4e2b-b3d6-264becd52fb3" 00:04:25.810 ], 00:04:25.810 "product_name": "Malloc disk", 00:04:25.810 "block_size": 4096, 00:04:25.810 "num_blocks": 256, 00:04:25.810 "uuid": "443c48fe-2422-4e2b-b3d6-264becd52fb3", 00:04:25.810 "assigned_rate_limits": { 00:04:25.810 "rw_ios_per_sec": 0, 00:04:25.810 "rw_mbytes_per_sec": 0, 00:04:25.810 "r_mbytes_per_sec": 0, 00:04:25.810 "w_mbytes_per_sec": 0 00:04:25.810 }, 00:04:25.810 "claimed": false, 00:04:25.810 "zoned": false, 00:04:25.810 "supported_io_types": { 00:04:25.810 "read": true, 00:04:25.810 "write": true, 00:04:25.810 "unmap": true, 00:04:25.810 "flush": true, 00:04:25.810 "reset": true, 00:04:25.810 "nvme_admin": false, 00:04:25.810 "nvme_io": false, 00:04:25.810 "nvme_io_md": false, 00:04:25.810 "write_zeroes": true, 00:04:25.810 "zcopy": true, 00:04:25.810 "get_zone_info": false, 00:04:25.810 "zone_management": false, 00:04:25.810 "zone_append": false, 00:04:25.810 "compare": false, 00:04:25.810 "compare_and_write": false, 00:04:25.810 "abort": true, 00:04:25.810 "seek_hole": false, 00:04:25.810 "seek_data": false, 00:04:25.810 "copy": true, 00:04:25.810 "nvme_iov_md": false 00:04:25.810 }, 00:04:25.810 "memory_domains": [ 00:04:25.810 { 00:04:25.810 "dma_device_id": "system", 00:04:25.810 "dma_device_type": 1 00:04:25.810 }, 00:04:25.810 { 00:04:25.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.810 "dma_device_type": 2 00:04:25.810 } 00:04:25.810 ], 00:04:25.810 "driver_specific": {} 00:04:25.810 } 00:04:25.810 ]' 00:04:25.810 20:30:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:26.068 20:30:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:26.068 20:30:00 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:26.068 20:30:00 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.068 20:30:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:26.068 20:30:00 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.068 20:30:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:26.068 20:30:00 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.068 20:30:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:26.068 20:30:00 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.068 20:30:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:26.068 20:30:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:26.068 20:30:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:26.068 00:04:26.068 real 0m0.132s 00:04:26.068 user 0m0.082s 00:04:26.068 sys 0m0.017s 00:04:26.068 20:30:00 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.068 20:30:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:26.068 ************************************ 00:04:26.068 END TEST rpc_plugins 00:04:26.068 ************************************ 00:04:26.068 20:30:00 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:26.068 20:30:00 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:26.068 20:30:00 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.068 20:30:00 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.068 20:30:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.068 ************************************ 00:04:26.068 START TEST rpc_trace_cmd_test 00:04:26.068 ************************************ 00:04:26.068 20:30:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:26.068 20:30:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:26.068 20:30:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:26.068 20:30:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.068 20:30:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:26.068 20:30:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.068 20:30:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:26.068 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2504086", 00:04:26.068 "tpoint_group_mask": "0x8", 00:04:26.068 "iscsi_conn": { 00:04:26.068 "mask": "0x2", 00:04:26.068 "tpoint_mask": "0x0" 00:04:26.068 }, 00:04:26.068 "scsi": { 00:04:26.068 "mask": "0x4", 00:04:26.068 "tpoint_mask": "0x0" 00:04:26.068 }, 00:04:26.068 "bdev": { 00:04:26.068 "mask": "0x8", 00:04:26.068 "tpoint_mask": "0xffffffffffffffff" 00:04:26.068 }, 00:04:26.068 "nvmf_rdma": { 00:04:26.068 "mask": "0x10", 00:04:26.068 "tpoint_mask": "0x0" 00:04:26.068 }, 00:04:26.068 "nvmf_tcp": { 00:04:26.068 "mask": "0x20", 00:04:26.068 "tpoint_mask": "0x0" 00:04:26.068 }, 00:04:26.068 "ftl": { 00:04:26.068 "mask": "0x40", 00:04:26.068 "tpoint_mask": "0x0" 00:04:26.068 }, 00:04:26.068 "blobfs": { 00:04:26.068 "mask": "0x80", 00:04:26.068 "tpoint_mask": "0x0" 00:04:26.068 }, 00:04:26.068 "dsa": { 00:04:26.068 "mask": "0x200", 00:04:26.068 "tpoint_mask": "0x0" 00:04:26.068 }, 00:04:26.068 "thread": { 00:04:26.068 "mask": "0x400", 00:04:26.068 "tpoint_mask": "0x0" 00:04:26.068 }, 00:04:26.068 "nvme_pcie": { 00:04:26.068 "mask": "0x800", 00:04:26.068 "tpoint_mask": "0x0" 00:04:26.068 }, 00:04:26.068 "iaa": { 00:04:26.068 "mask": "0x1000", 00:04:26.068 "tpoint_mask": "0x0" 00:04:26.068 }, 00:04:26.068 "nvme_tcp": { 00:04:26.068 "mask": "0x2000", 00:04:26.068 "tpoint_mask": "0x0" 00:04:26.068 }, 00:04:26.068 "bdev_nvme": { 00:04:26.068 "mask": "0x4000", 00:04:26.068 "tpoint_mask": "0x0" 00:04:26.068 }, 00:04:26.068 "sock": { 00:04:26.068 "mask": "0x8000", 00:04:26.068 "tpoint_mask": "0x0" 00:04:26.068 } 00:04:26.068 }' 00:04:26.068 20:30:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:26.068 20:30:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:26.068 20:30:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:26.068 20:30:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:26.069 20:30:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:26.069 20:30:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:26.069 20:30:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:26.327 20:30:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:26.327 20:30:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:26.327 20:30:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
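The jq assertions above pin down the trace state: because spdk_tgt was started with -e bdev, trace_get_info reports tpoint_group_mask 0x8 and a fully-enabled tpoint_mask only for the bdev group, plus the shm path that spdk_trace would read. A hedged sketch of probing the same fields by hand over the default RPC socket:

info=$(scripts/rpc.py trace_get_info)
echo "$info" | jq -r .tpoint_group_mask   # 0x8, the bdev group from '-e bdev'
echo "$info" | jq -r .bdev.tpoint_mask    # 0xffffffffffffffff
echo "$info" | jq -r .tpoint_shm_path     # /dev/shm/spdk_tgt_trace.pid<pid>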
00:04:26.327 00:04:26.327 real 0m0.184s 00:04:26.327 user 0m0.163s 00:04:26.327 sys 0m0.014s 00:04:26.327 20:30:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.327 20:30:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:26.327 ************************************ 00:04:26.327 END TEST rpc_trace_cmd_test 00:04:26.327 ************************************ 00:04:26.327 20:30:00 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:26.327 20:30:00 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:26.327 20:30:00 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:26.327 20:30:00 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:26.327 20:30:00 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.327 20:30:00 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.327 20:30:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.327 ************************************ 00:04:26.327 START TEST rpc_daemon_integrity 00:04:26.327 ************************************ 00:04:26.327 20:30:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:26.327 20:30:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:26.327 20:30:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.327 20:30:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.327 20:30:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.327 20:30:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:26.327 20:30:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:26.327 20:30:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:26.327 20:30:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:26.327 20:30:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.327 20:30:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.327 20:30:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.327 20:30:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:26.327 20:30:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:26.327 20:30:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.327 20:30:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.327 20:30:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.327 20:30:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:26.327 { 00:04:26.327 "name": "Malloc2", 00:04:26.327 "aliases": [ 00:04:26.327 "8fda3a31-21a7-40ea-9cf4-b23cb76f8905" 00:04:26.327 ], 00:04:26.327 "product_name": "Malloc disk", 00:04:26.327 "block_size": 512, 00:04:26.327 "num_blocks": 16384, 00:04:26.327 "uuid": "8fda3a31-21a7-40ea-9cf4-b23cb76f8905", 00:04:26.327 "assigned_rate_limits": { 00:04:26.327 "rw_ios_per_sec": 0, 00:04:26.327 "rw_mbytes_per_sec": 0, 00:04:26.327 "r_mbytes_per_sec": 0, 00:04:26.327 "w_mbytes_per_sec": 0 00:04:26.327 }, 00:04:26.327 "claimed": false, 00:04:26.327 "zoned": false, 00:04:26.327 "supported_io_types": { 00:04:26.327 "read": true, 00:04:26.327 "write": true, 00:04:26.327 "unmap": true, 00:04:26.327 "flush": true, 00:04:26.327 "reset": true, 00:04:26.327 "nvme_admin": false, 00:04:26.327 "nvme_io": false, 
00:04:26.327 "nvme_io_md": false, 00:04:26.327 "write_zeroes": true, 00:04:26.327 "zcopy": true, 00:04:26.327 "get_zone_info": false, 00:04:26.327 "zone_management": false, 00:04:26.327 "zone_append": false, 00:04:26.327 "compare": false, 00:04:26.327 "compare_and_write": false, 00:04:26.327 "abort": true, 00:04:26.327 "seek_hole": false, 00:04:26.327 "seek_data": false, 00:04:26.327 "copy": true, 00:04:26.327 "nvme_iov_md": false 00:04:26.327 }, 00:04:26.327 "memory_domains": [ 00:04:26.327 { 00:04:26.327 "dma_device_id": "system", 00:04:26.327 "dma_device_type": 1 00:04:26.327 }, 00:04:26.327 { 00:04:26.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.327 "dma_device_type": 2 00:04:26.327 } 00:04:26.327 ], 00:04:26.327 "driver_specific": {} 00:04:26.327 } 00:04:26.327 ]' 00:04:26.327 20:30:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:26.327 20:30:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:26.327 20:30:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:26.327 20:30:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.327 20:30:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.327 [2024-07-15 20:30:00.794509] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:26.327 [2024-07-15 20:30:00.794538] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:26.327 [2024-07-15 20:30:00.794549] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xcdbac0 00:04:26.327 [2024-07-15 20:30:00.794556] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:26.327 [2024-07-15 20:30:00.795530] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:26.327 [2024-07-15 20:30:00.795550] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:26.327 Passthru0 00:04:26.328 20:30:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.328 20:30:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:26.328 20:30:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.328 20:30:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.586 20:30:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.586 20:30:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:26.586 { 00:04:26.586 "name": "Malloc2", 00:04:26.586 "aliases": [ 00:04:26.586 "8fda3a31-21a7-40ea-9cf4-b23cb76f8905" 00:04:26.586 ], 00:04:26.586 "product_name": "Malloc disk", 00:04:26.586 "block_size": 512, 00:04:26.586 "num_blocks": 16384, 00:04:26.586 "uuid": "8fda3a31-21a7-40ea-9cf4-b23cb76f8905", 00:04:26.586 "assigned_rate_limits": { 00:04:26.586 "rw_ios_per_sec": 0, 00:04:26.586 "rw_mbytes_per_sec": 0, 00:04:26.586 "r_mbytes_per_sec": 0, 00:04:26.586 "w_mbytes_per_sec": 0 00:04:26.586 }, 00:04:26.586 "claimed": true, 00:04:26.586 "claim_type": "exclusive_write", 00:04:26.586 "zoned": false, 00:04:26.586 "supported_io_types": { 00:04:26.586 "read": true, 00:04:26.586 "write": true, 00:04:26.586 "unmap": true, 00:04:26.586 "flush": true, 00:04:26.586 "reset": true, 00:04:26.586 "nvme_admin": false, 00:04:26.586 "nvme_io": false, 00:04:26.586 "nvme_io_md": false, 00:04:26.586 "write_zeroes": true, 00:04:26.586 "zcopy": true, 00:04:26.586 "get_zone_info": 
false, 00:04:26.586 "zone_management": false, 00:04:26.586 "zone_append": false, 00:04:26.586 "compare": false, 00:04:26.586 "compare_and_write": false, 00:04:26.586 "abort": true, 00:04:26.586 "seek_hole": false, 00:04:26.586 "seek_data": false, 00:04:26.586 "copy": true, 00:04:26.586 "nvme_iov_md": false 00:04:26.586 }, 00:04:26.586 "memory_domains": [ 00:04:26.586 { 00:04:26.586 "dma_device_id": "system", 00:04:26.586 "dma_device_type": 1 00:04:26.586 }, 00:04:26.586 { 00:04:26.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.586 "dma_device_type": 2 00:04:26.586 } 00:04:26.586 ], 00:04:26.586 "driver_specific": {} 00:04:26.586 }, 00:04:26.586 { 00:04:26.586 "name": "Passthru0", 00:04:26.586 "aliases": [ 00:04:26.586 "b655f9da-203e-56a8-b668-005927f9d9c8" 00:04:26.586 ], 00:04:26.586 "product_name": "passthru", 00:04:26.586 "block_size": 512, 00:04:26.586 "num_blocks": 16384, 00:04:26.586 "uuid": "b655f9da-203e-56a8-b668-005927f9d9c8", 00:04:26.586 "assigned_rate_limits": { 00:04:26.586 "rw_ios_per_sec": 0, 00:04:26.586 "rw_mbytes_per_sec": 0, 00:04:26.586 "r_mbytes_per_sec": 0, 00:04:26.586 "w_mbytes_per_sec": 0 00:04:26.586 }, 00:04:26.586 "claimed": false, 00:04:26.586 "zoned": false, 00:04:26.586 "supported_io_types": { 00:04:26.586 "read": true, 00:04:26.586 "write": true, 00:04:26.586 "unmap": true, 00:04:26.586 "flush": true, 00:04:26.586 "reset": true, 00:04:26.586 "nvme_admin": false, 00:04:26.586 "nvme_io": false, 00:04:26.586 "nvme_io_md": false, 00:04:26.586 "write_zeroes": true, 00:04:26.586 "zcopy": true, 00:04:26.586 "get_zone_info": false, 00:04:26.586 "zone_management": false, 00:04:26.586 "zone_append": false, 00:04:26.586 "compare": false, 00:04:26.586 "compare_and_write": false, 00:04:26.586 "abort": true, 00:04:26.586 "seek_hole": false, 00:04:26.586 "seek_data": false, 00:04:26.586 "copy": true, 00:04:26.586 "nvme_iov_md": false 00:04:26.586 }, 00:04:26.586 "memory_domains": [ 00:04:26.586 { 00:04:26.586 "dma_device_id": "system", 00:04:26.586 "dma_device_type": 1 00:04:26.586 }, 00:04:26.586 { 00:04:26.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.586 "dma_device_type": 2 00:04:26.586 } 00:04:26.586 ], 00:04:26.586 "driver_specific": { 00:04:26.586 "passthru": { 00:04:26.586 "name": "Passthru0", 00:04:26.586 "base_bdev_name": "Malloc2" 00:04:26.586 } 00:04:26.586 } 00:04:26.586 } 00:04:26.586 ]' 00:04:26.586 20:30:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:26.586 20:30:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:26.586 20:30:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:26.586 20:30:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.586 20:30:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.586 20:30:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.586 20:30:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:26.586 20:30:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.586 20:30:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.586 20:30:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.586 20:30:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:26.586 20:30:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.586 20:30:00 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.586 20:30:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.586 20:30:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:26.586 20:30:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:26.586 20:30:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:26.586 00:04:26.586 real 0m0.267s 00:04:26.586 user 0m0.175s 00:04:26.586 sys 0m0.031s 00:04:26.586 20:30:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.586 20:30:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.586 ************************************ 00:04:26.586 END TEST rpc_daemon_integrity 00:04:26.586 ************************************ 00:04:26.586 20:30:00 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:26.586 20:30:00 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:26.586 20:30:00 rpc -- rpc/rpc.sh@84 -- # killprocess 2504086 00:04:26.586 20:30:00 rpc -- common/autotest_common.sh@948 -- # '[' -z 2504086 ']' 00:04:26.586 20:30:00 rpc -- common/autotest_common.sh@952 -- # kill -0 2504086 00:04:26.586 20:30:00 rpc -- common/autotest_common.sh@953 -- # uname 00:04:26.586 20:30:00 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:26.586 20:30:00 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2504086 00:04:26.586 20:30:01 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:26.586 20:30:01 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:26.586 20:30:01 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2504086' 00:04:26.586 killing process with pid 2504086 00:04:26.586 20:30:01 rpc -- common/autotest_common.sh@967 -- # kill 2504086 00:04:26.586 20:30:01 rpc -- common/autotest_common.sh@972 -- # wait 2504086 00:04:26.844 00:04:26.844 real 0m2.323s 00:04:26.844 user 0m2.973s 00:04:26.844 sys 0m0.618s 00:04:26.844 20:30:01 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.844 20:30:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.844 ************************************ 00:04:26.844 END TEST rpc 00:04:26.844 ************************************ 00:04:27.101 20:30:01 -- common/autotest_common.sh@1142 -- # return 0 00:04:27.101 20:30:01 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:27.101 20:30:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:27.101 20:30:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.101 20:30:01 -- common/autotest_common.sh@10 -- # set +x 00:04:27.101 ************************************ 00:04:27.101 START TEST skip_rpc 00:04:27.101 ************************************ 00:04:27.101 20:30:01 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:27.101 * Looking for test storage... 
00:04:27.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:27.101 20:30:01 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:27.101 20:30:01 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:27.101 20:30:01 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:27.101 20:30:01 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:27.101 20:30:01 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.101 20:30:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.101 ************************************ 00:04:27.101 START TEST skip_rpc 00:04:27.101 ************************************ 00:04:27.101 20:30:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:27.101 20:30:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:27.101 20:30:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2504817 00:04:27.101 20:30:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:27.101 20:30:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:27.101 [2024-07-15 20:30:01.533999] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:04:27.101 [2024-07-15 20:30:01.534043] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2504817 ] 00:04:27.101 EAL: No free 2048 kB hugepages reported on node 1 00:04:27.101 [2024-07-15 20:30:01.583312] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.359 [2024-07-15 20:30:01.656827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.617 20:30:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:32.617 20:30:06 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:32.617 20:30:06 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:32.617 20:30:06 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:32.617 20:30:06 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:32.617 20:30:06 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:32.617 20:30:06 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:32.617 20:30:06 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:32.617 20:30:06 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:32.617 20:30:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.617 20:30:06 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:32.617 20:30:06 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:32.617 20:30:06 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:32.617 20:30:06 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:32.617 20:30:06 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:32.617 20:30:06 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:32.617 20:30:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2504817 00:04:32.617 20:30:06 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 2504817 ']' 00:04:32.617 20:30:06 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 2504817 00:04:32.617 20:30:06 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:32.617 20:30:06 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:32.617 20:30:06 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2504817 00:04:32.617 20:30:06 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:32.617 20:30:06 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:32.617 20:30:06 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2504817' 00:04:32.617 killing process with pid 2504817 00:04:32.617 20:30:06 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 2504817 00:04:32.617 20:30:06 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 2504817 00:04:32.617 00:04:32.617 real 0m5.360s 00:04:32.617 user 0m5.147s 00:04:32.617 sys 0m0.238s 00:04:32.617 20:30:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.617 20:30:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.617 ************************************ 00:04:32.617 END TEST skip_rpc 00:04:32.617 ************************************ 00:04:32.617 20:30:06 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:32.617 20:30:06 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:32.617 20:30:06 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.617 20:30:06 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.617 20:30:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.617 ************************************ 00:04:32.617 START TEST skip_rpc_with_json 00:04:32.617 ************************************ 00:04:32.617 20:30:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:32.617 20:30:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:32.617 20:30:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2505791 00:04:32.617 20:30:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:32.617 20:30:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:32.617 20:30:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2505791 00:04:32.617 20:30:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 2505791 ']' 00:04:32.617 20:30:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.617 20:30:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:32.617 20:30:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
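The skip_rpc case that just finished checks the negative path: a target started with --no-rpc-server must stay up but refuse RPC clients, which is why the NOT wrapper above treats the failing spdk_get_version as success (es=1). A minimal sketch of that check, with paths relative to the spdk checkout and the same 5-second settle the test script uses:

build/bin/spdk_tgt --no-rpc-server -m 0x1 &
pid=$!
sleep 5
if scripts/rpc.py spdk_get_version; then
    echo "unexpected: RPC server answered" >&2   # the test expects this call to fail
fi
kill "$pid"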
00:04:32.617 20:30:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:32.617 20:30:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:32.617 [2024-07-15 20:30:06.966559] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:04:32.617 [2024-07-15 20:30:06.966603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2505791 ] 00:04:32.617 EAL: No free 2048 kB hugepages reported on node 1 00:04:32.617 [2024-07-15 20:30:07.018739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.618 [2024-07-15 20:30:07.097446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.560 20:30:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:33.560 20:30:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:33.560 20:30:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:33.560 20:30:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.560 20:30:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:33.560 [2024-07-15 20:30:07.754319] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:33.560 request: 00:04:33.560 { 00:04:33.560 "trtype": "tcp", 00:04:33.560 "method": "nvmf_get_transports", 00:04:33.560 "req_id": 1 00:04:33.560 } 00:04:33.560 Got JSON-RPC error response 00:04:33.560 response: 00:04:33.560 { 00:04:33.560 "code": -19, 00:04:33.560 "message": "No such device" 00:04:33.560 } 00:04:33.560 20:30:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:33.560 20:30:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:33.560 20:30:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.560 20:30:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:33.560 [2024-07-15 20:30:07.766424] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:33.560 20:30:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.560 20:30:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:33.560 20:30:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.560 20:30:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:33.560 20:30:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.560 20:30:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:33.560 { 00:04:33.560 "subsystems": [ 00:04:33.560 { 00:04:33.560 "subsystem": "vfio_user_target", 00:04:33.560 "config": null 00:04:33.560 }, 00:04:33.560 { 00:04:33.560 "subsystem": "keyring", 00:04:33.560 "config": [] 00:04:33.560 }, 00:04:33.560 { 00:04:33.560 "subsystem": "iobuf", 00:04:33.560 "config": [ 00:04:33.560 { 00:04:33.560 "method": "iobuf_set_options", 00:04:33.560 "params": { 00:04:33.560 "small_pool_count": 8192, 00:04:33.560 "large_pool_count": 1024, 00:04:33.560 "small_bufsize": 8192, 00:04:33.560 "large_bufsize": 
135168 00:04:33.560 } 00:04:33.560 } 00:04:33.560 ] 00:04:33.560 }, 00:04:33.560 { 00:04:33.560 "subsystem": "sock", 00:04:33.560 "config": [ 00:04:33.560 { 00:04:33.560 "method": "sock_set_default_impl", 00:04:33.560 "params": { 00:04:33.560 "impl_name": "posix" 00:04:33.560 } 00:04:33.560 }, 00:04:33.560 { 00:04:33.560 "method": "sock_impl_set_options", 00:04:33.560 "params": { 00:04:33.560 "impl_name": "ssl", 00:04:33.560 "recv_buf_size": 4096, 00:04:33.560 "send_buf_size": 4096, 00:04:33.560 "enable_recv_pipe": true, 00:04:33.560 "enable_quickack": false, 00:04:33.560 "enable_placement_id": 0, 00:04:33.560 "enable_zerocopy_send_server": true, 00:04:33.560 "enable_zerocopy_send_client": false, 00:04:33.560 "zerocopy_threshold": 0, 00:04:33.560 "tls_version": 0, 00:04:33.560 "enable_ktls": false 00:04:33.560 } 00:04:33.560 }, 00:04:33.560 { 00:04:33.560 "method": "sock_impl_set_options", 00:04:33.560 "params": { 00:04:33.560 "impl_name": "posix", 00:04:33.560 "recv_buf_size": 2097152, 00:04:33.560 "send_buf_size": 2097152, 00:04:33.560 "enable_recv_pipe": true, 00:04:33.560 "enable_quickack": false, 00:04:33.560 "enable_placement_id": 0, 00:04:33.560 "enable_zerocopy_send_server": true, 00:04:33.560 "enable_zerocopy_send_client": false, 00:04:33.560 "zerocopy_threshold": 0, 00:04:33.560 "tls_version": 0, 00:04:33.560 "enable_ktls": false 00:04:33.560 } 00:04:33.560 } 00:04:33.560 ] 00:04:33.560 }, 00:04:33.560 { 00:04:33.560 "subsystem": "vmd", 00:04:33.560 "config": [] 00:04:33.560 }, 00:04:33.560 { 00:04:33.560 "subsystem": "accel", 00:04:33.560 "config": [ 00:04:33.560 { 00:04:33.560 "method": "accel_set_options", 00:04:33.560 "params": { 00:04:33.560 "small_cache_size": 128, 00:04:33.560 "large_cache_size": 16, 00:04:33.560 "task_count": 2048, 00:04:33.560 "sequence_count": 2048, 00:04:33.560 "buf_count": 2048 00:04:33.560 } 00:04:33.560 } 00:04:33.560 ] 00:04:33.560 }, 00:04:33.560 { 00:04:33.560 "subsystem": "bdev", 00:04:33.560 "config": [ 00:04:33.560 { 00:04:33.560 "method": "bdev_set_options", 00:04:33.560 "params": { 00:04:33.560 "bdev_io_pool_size": 65535, 00:04:33.560 "bdev_io_cache_size": 256, 00:04:33.560 "bdev_auto_examine": true, 00:04:33.560 "iobuf_small_cache_size": 128, 00:04:33.560 "iobuf_large_cache_size": 16 00:04:33.560 } 00:04:33.560 }, 00:04:33.560 { 00:04:33.560 "method": "bdev_raid_set_options", 00:04:33.560 "params": { 00:04:33.560 "process_window_size_kb": 1024 00:04:33.560 } 00:04:33.560 }, 00:04:33.560 { 00:04:33.560 "method": "bdev_iscsi_set_options", 00:04:33.560 "params": { 00:04:33.560 "timeout_sec": 30 00:04:33.560 } 00:04:33.560 }, 00:04:33.560 { 00:04:33.560 "method": "bdev_nvme_set_options", 00:04:33.560 "params": { 00:04:33.560 "action_on_timeout": "none", 00:04:33.560 "timeout_us": 0, 00:04:33.560 "timeout_admin_us": 0, 00:04:33.560 "keep_alive_timeout_ms": 10000, 00:04:33.560 "arbitration_burst": 0, 00:04:33.560 "low_priority_weight": 0, 00:04:33.560 "medium_priority_weight": 0, 00:04:33.560 "high_priority_weight": 0, 00:04:33.560 "nvme_adminq_poll_period_us": 10000, 00:04:33.560 "nvme_ioq_poll_period_us": 0, 00:04:33.560 "io_queue_requests": 0, 00:04:33.560 "delay_cmd_submit": true, 00:04:33.560 "transport_retry_count": 4, 00:04:33.560 "bdev_retry_count": 3, 00:04:33.560 "transport_ack_timeout": 0, 00:04:33.560 "ctrlr_loss_timeout_sec": 0, 00:04:33.560 "reconnect_delay_sec": 0, 00:04:33.560 "fast_io_fail_timeout_sec": 0, 00:04:33.560 "disable_auto_failback": false, 00:04:33.560 "generate_uuids": false, 00:04:33.560 "transport_tos": 0, 
00:04:33.560 "nvme_error_stat": false, 00:04:33.560 "rdma_srq_size": 0, 00:04:33.560 "io_path_stat": false, 00:04:33.560 "allow_accel_sequence": false, 00:04:33.560 "rdma_max_cq_size": 0, 00:04:33.560 "rdma_cm_event_timeout_ms": 0, 00:04:33.560 "dhchap_digests": [ 00:04:33.560 "sha256", 00:04:33.560 "sha384", 00:04:33.560 "sha512" 00:04:33.560 ], 00:04:33.560 "dhchap_dhgroups": [ 00:04:33.560 "null", 00:04:33.560 "ffdhe2048", 00:04:33.560 "ffdhe3072", 00:04:33.560 "ffdhe4096", 00:04:33.560 "ffdhe6144", 00:04:33.560 "ffdhe8192" 00:04:33.560 ] 00:04:33.560 } 00:04:33.560 }, 00:04:33.560 { 00:04:33.560 "method": "bdev_nvme_set_hotplug", 00:04:33.560 "params": { 00:04:33.560 "period_us": 100000, 00:04:33.560 "enable": false 00:04:33.560 } 00:04:33.560 }, 00:04:33.560 { 00:04:33.560 "method": "bdev_wait_for_examine" 00:04:33.560 } 00:04:33.560 ] 00:04:33.560 }, 00:04:33.560 { 00:04:33.561 "subsystem": "scsi", 00:04:33.561 "config": null 00:04:33.561 }, 00:04:33.561 { 00:04:33.561 "subsystem": "scheduler", 00:04:33.561 "config": [ 00:04:33.561 { 00:04:33.561 "method": "framework_set_scheduler", 00:04:33.561 "params": { 00:04:33.561 "name": "static" 00:04:33.561 } 00:04:33.561 } 00:04:33.561 ] 00:04:33.561 }, 00:04:33.561 { 00:04:33.561 "subsystem": "vhost_scsi", 00:04:33.561 "config": [] 00:04:33.561 }, 00:04:33.561 { 00:04:33.561 "subsystem": "vhost_blk", 00:04:33.561 "config": [] 00:04:33.561 }, 00:04:33.561 { 00:04:33.561 "subsystem": "ublk", 00:04:33.561 "config": [] 00:04:33.561 }, 00:04:33.561 { 00:04:33.561 "subsystem": "nbd", 00:04:33.561 "config": [] 00:04:33.561 }, 00:04:33.561 { 00:04:33.561 "subsystem": "nvmf", 00:04:33.561 "config": [ 00:04:33.561 { 00:04:33.561 "method": "nvmf_set_config", 00:04:33.561 "params": { 00:04:33.561 "discovery_filter": "match_any", 00:04:33.561 "admin_cmd_passthru": { 00:04:33.561 "identify_ctrlr": false 00:04:33.561 } 00:04:33.561 } 00:04:33.561 }, 00:04:33.561 { 00:04:33.561 "method": "nvmf_set_max_subsystems", 00:04:33.561 "params": { 00:04:33.561 "max_subsystems": 1024 00:04:33.561 } 00:04:33.561 }, 00:04:33.561 { 00:04:33.561 "method": "nvmf_set_crdt", 00:04:33.561 "params": { 00:04:33.561 "crdt1": 0, 00:04:33.561 "crdt2": 0, 00:04:33.561 "crdt3": 0 00:04:33.561 } 00:04:33.561 }, 00:04:33.561 { 00:04:33.561 "method": "nvmf_create_transport", 00:04:33.561 "params": { 00:04:33.561 "trtype": "TCP", 00:04:33.561 "max_queue_depth": 128, 00:04:33.561 "max_io_qpairs_per_ctrlr": 127, 00:04:33.561 "in_capsule_data_size": 4096, 00:04:33.561 "max_io_size": 131072, 00:04:33.561 "io_unit_size": 131072, 00:04:33.561 "max_aq_depth": 128, 00:04:33.561 "num_shared_buffers": 511, 00:04:33.561 "buf_cache_size": 4294967295, 00:04:33.561 "dif_insert_or_strip": false, 00:04:33.561 "zcopy": false, 00:04:33.561 "c2h_success": true, 00:04:33.561 "sock_priority": 0, 00:04:33.561 "abort_timeout_sec": 1, 00:04:33.561 "ack_timeout": 0, 00:04:33.561 "data_wr_pool_size": 0 00:04:33.561 } 00:04:33.561 } 00:04:33.561 ] 00:04:33.561 }, 00:04:33.561 { 00:04:33.561 "subsystem": "iscsi", 00:04:33.561 "config": [ 00:04:33.561 { 00:04:33.561 "method": "iscsi_set_options", 00:04:33.561 "params": { 00:04:33.561 "node_base": "iqn.2016-06.io.spdk", 00:04:33.561 "max_sessions": 128, 00:04:33.561 "max_connections_per_session": 2, 00:04:33.561 "max_queue_depth": 64, 00:04:33.561 "default_time2wait": 2, 00:04:33.561 "default_time2retain": 20, 00:04:33.561 "first_burst_length": 8192, 00:04:33.561 "immediate_data": true, 00:04:33.561 "allow_duplicated_isid": false, 00:04:33.561 
"error_recovery_level": 0, 00:04:33.561 "nop_timeout": 60, 00:04:33.561 "nop_in_interval": 30, 00:04:33.561 "disable_chap": false, 00:04:33.561 "require_chap": false, 00:04:33.561 "mutual_chap": false, 00:04:33.561 "chap_group": 0, 00:04:33.561 "max_large_datain_per_connection": 64, 00:04:33.561 "max_r2t_per_connection": 4, 00:04:33.561 "pdu_pool_size": 36864, 00:04:33.561 "immediate_data_pool_size": 16384, 00:04:33.561 "data_out_pool_size": 2048 00:04:33.561 } 00:04:33.561 } 00:04:33.561 ] 00:04:33.561 } 00:04:33.561 ] 00:04:33.561 } 00:04:33.561 20:30:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:33.561 20:30:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2505791 00:04:33.561 20:30:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2505791 ']' 00:04:33.561 20:30:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2505791 00:04:33.561 20:30:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:33.561 20:30:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:33.561 20:30:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2505791 00:04:33.561 20:30:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:33.561 20:30:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:33.561 20:30:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2505791' 00:04:33.561 killing process with pid 2505791 00:04:33.561 20:30:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2505791 00:04:33.561 20:30:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2505791 00:04:33.883 20:30:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2506156 00:04:33.883 20:30:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:33.883 20:30:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:39.143 20:30:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2506156 00:04:39.143 20:30:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2506156 ']' 00:04:39.143 20:30:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2506156 00:04:39.143 20:30:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:39.143 20:30:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:39.143 20:30:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2506156 00:04:39.143 20:30:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:39.143 20:30:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:39.143 20:30:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2506156' 00:04:39.143 killing process with pid 2506156 00:04:39.143 20:30:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2506156 00:04:39.143 20:30:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2506156 
00:04:39.401 20:30:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:39.401 20:30:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:39.401 00:04:39.401 real 0m6.729s 00:04:39.401 user 0m6.576s 00:04:39.401 sys 0m0.556s 00:04:39.401 20:30:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.401 20:30:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:39.401 ************************************ 00:04:39.401 END TEST skip_rpc_with_json 00:04:39.401 ************************************ 00:04:39.401 20:30:13 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:39.401 20:30:13 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:39.401 20:30:13 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.401 20:30:13 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.401 20:30:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.401 ************************************ 00:04:39.401 START TEST skip_rpc_with_delay 00:04:39.401 ************************************ 00:04:39.401 20:30:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:39.401 20:30:13 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:39.401 20:30:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:39.401 20:30:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:39.401 20:30:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:39.401 20:30:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:39.401 20:30:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:39.401 20:30:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:39.401 20:30:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:39.401 20:30:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:39.401 20:30:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:39.401 20:30:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:39.401 20:30:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:39.401 [2024-07-15 20:30:13.762717] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
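The *ERROR* above is the entire point of skip_rpc_with_delay: --wait-for-rpc defers subsystem initialization until a later RPC (framework_start_init) arrives, so pairing it with --no-rpc-server can never make progress and spdk_app_start rejects it immediately. A sketch of the expected-failure invocation:

if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "unexpected: target started" >&2   # the test requires a non-zero exit here
fi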
00:04:39.401 [2024-07-15 20:30:13.762785] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2
00:04:39.401 20:30:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1
00:04:39.401 20:30:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:04:39.401 20:30:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:04:39.401 20:30:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:04:39.401
00:04:39.401 real 0m0.064s
00:04:39.402 user 0m0.042s
00:04:39.402 sys 0m0.021s
00:04:39.402 20:30:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:39.402 20:30:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:04:39.402 ************************************
00:04:39.402 END TEST skip_rpc_with_delay
00:04:39.402 ************************************
00:04:39.402 20:30:13 skip_rpc -- common/autotest_common.sh@1142 -- # return 0
00:04:39.402 20:30:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:04:39.402 20:30:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:04:39.402 20:30:13 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:04:39.402 20:30:13 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:39.402 20:30:13 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:39.402 20:30:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:39.402 ************************************
00:04:39.402 START TEST exit_on_failed_rpc_init
00:04:39.402 ************************************
00:04:39.402 20:30:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init
00:04:39.402 20:30:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2507398
00:04:39.402 20:30:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:39.402 20:30:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2507398
00:04:39.402 20:30:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 2507398 ']'
00:04:39.402 20:30:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:39.402 20:30:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100
00:04:39.402 20:30:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:39.402 20:30:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable
00:04:39.402 20:30:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:04:39.659 [2024-07-15 20:30:13.892819] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization...
00:04:39.659 [2024-07-15 20:30:13.892856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2507398 ]
00:04:39.659 EAL: No free 2048 kB hugepages reported on node 1
00:04:39.659 [2024-07-15 20:30:13.945035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:39.659 [2024-07-15 20:30:14.024371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:40.224 20:30:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:40.224 20:30:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0
00:04:40.224 20:30:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:40.224 20:30:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:04:40.224 20:30:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0
00:04:40.224 20:30:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:04:40.224 20:30:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:40.224 20:30:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:04:40.224 20:30:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:40.224 20:30:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:04:40.224 20:30:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:40.224 20:30:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:04:40.224 20:30:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:40.224 20:30:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:04:40.224 20:30:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:04:40.482 [2024-07-15 20:30:14.744986] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization...
00:04:40.483 [2024-07-15 20:30:14.745033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2507616 ]
00:04:40.483 EAL: No free 2048 kB hugepages reported on node 1
00:04:40.483 [2024-07-15 20:30:14.796525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:40.483 [2024-07-15 20:30:14.868818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:04:40.483 [2024-07-15 20:30:14.868884] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:04:40.483 [2024-07-15 20:30:14.868893] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:04:40.483 [2024-07-15 20:30:14.868899] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:04:40.483 20:30:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234
00:04:40.483 20:30:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:04:40.483 20:30:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106
00:04:40.483 20:30:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in
00:04:40.483 20:30:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1
00:04:40.483 20:30:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:04:40.483 20:30:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:04:40.483 20:30:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2507398
00:04:40.483 20:30:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 2507398 ']'
00:04:40.483 20:30:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 2507398
00:04:40.483 20:30:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname
00:04:40.483 20:30:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:04:40.483 20:30:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2507398
00:04:40.741 20:30:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:04:40.741 20:30:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:04:40.741 20:30:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2507398' killing process with pid 2507398 20:30:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 2507398 20:30:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 2507398
00:04:40.999
00:04:40.999 real 0m1.436s
00:04:40.999 user 0m1.664s
00:04:40.999 sys 0m0.380s
00:04:40.999 20:30:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:40.999 20:30:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:04:40.999 ************************************
00:04:40.999 END TEST exit_on_failed_rpc_init
00:04:40.999 ************************************
00:04:40.999 20:30:15 skip_rpc -- common/autotest_common.sh@1142 -- # return 0
00:04:40.999 20:30:15 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:04:40.999
00:04:40.999 real 0m13.932s
00:04:40.999 user 0m13.567s
00:04:40.999 sys 0m1.427s
00:04:40.999 20:30:15 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:40.999 20:30:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:40.999 ************************************
00:04:40.999 END TEST skip_rpc
00:04:40.999 ************************************
00:04:40.999 20:30:15 -- common/autotest_common.sh@1142 -- # return 0
00:04:40.999 20:30:15 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:04:40.999 20:30:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:40.999 20:30:15 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:40.999 20:30:15 -- common/autotest_common.sh@10 -- # set +x
00:04:40.999 ************************************
00:04:40.999 START TEST rpc_client
00:04:40.999 ************************************
00:04:40.999 20:30:15 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:04:40.999 * Looking for test storage...
00:04:40.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client
00:04:40.999 20:30:15 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test
00:04:41.257 OK
00:04:41.257 20:30:15 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:04:41.257
00:04:41.257 real 0m0.109s
00:04:41.257 user 0m0.053s
00:04:41.257 sys 0m0.064s
00:04:41.257 20:30:15 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:41.257 20:30:15 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:04:41.257 ************************************
00:04:41.257 END TEST rpc_client
00:04:41.257 ************************************
00:04:41.257 20:30:15 -- common/autotest_common.sh@1142 -- # return 0
00:04:41.257 20:30:15 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:04:41.257 20:30:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:41.257 20:30:15 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:41.257 20:30:15 -- common/autotest_common.sh@10 -- # set +x
00:04:41.257 ************************************
00:04:41.257 START TEST json_config
00:04:41.257 ************************************
00:04:41.257 20:30:15 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:04:41.257 20:30:15 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:04:41.257 20:30:15 json_config -- nvmf/common.sh@7 -- # uname -s
00:04:41.257 20:30:15 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:04:41.257 20:30:15 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:04:41.257 20:30:15 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:04:41.257 20:30:15 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:04:41.257 20:30:15 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:04:41.257 20:30:15 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:04:41.257 20:30:15 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:04:41.257 20:30:15 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:04:41.257 20:30:15 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:04:41.257 20:30:15 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:04:41.257 20:30:15 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:04:41.257 20:30:15 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:04:41.257 20:30:15 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:04:41.257 20:30:15 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:04:41.257 20:30:15 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:04:41.257 20:30:15 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:04:41.257 20:30:15 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:04:41.258 20:30:15 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:04:41.258 20:30:15 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:41.258 20:30:15 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:41.258 20:30:15 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:41.258 20:30:15 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:41.258 20:30:15 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:41.258 20:30:15 json_config -- paths/export.sh@5 -- # export PATH
00:04:41.258 20:30:15 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:41.258 20:30:15 json_config -- nvmf/common.sh@47 -- # : 0
00:04:41.258 20:30:15 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:04:41.258 20:30:15 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:04:41.258 20:30:15 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:04:41.258 20:30:15 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:04:41.258 20:30:15 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:04:41.258 20:30:15 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:04:41.258 20:30:15 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:04:41.258 20:30:15 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0
00:04:41.258 20:30:15 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:04:41.258 20:30:15 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:04:41.258 20:30:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:04:41.258 20:30:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:04:41.258 20:30:15 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 ))
00:04:41.258 20:30:15 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='')
00:04:41.258 20:30:15 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid
00:04:41.258 20:30:15 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock')
00:04:41.258 20:30:15 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket
00:04:41.258 20:30:15 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024')
00:04:41.258 20:30:15 json_config -- json_config/json_config.sh@33 -- # declare -A app_params
00:04:41.258 20:30:15 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json')
00:04:41.258 20:30:15 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path
00:04:41.258 20:30:15 json_config -- json_config/json_config.sh@40 -- # last_event_id=0
00:04:41.258 20:30:15 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:04:41.258 20:30:15 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' INFO: JSON configuration test init
00:04:41.258 20:30:15 json_config -- json_config/json_config.sh@357 -- # json_config_test_init
00:04:41.258 20:30:15 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init
00:04:41.258 20:30:15 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:04:41.258 20:30:15 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:41.258 20:30:15 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target
00:04:41.258 20:30:15 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:04:41.258 20:30:15 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:41.258 20:30:15 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc
00:04:41.258 20:30:15 json_config -- json_config/common.sh@9 -- # local app=target
00:04:41.258 20:30:15 json_config -- json_config/common.sh@10 -- # shift
00:04:41.258 20:30:15 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:04:41.258 20:30:15 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:04:41.258 20:30:15 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:04:41.258 20:30:15 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:41.258 20:30:15 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:41.258 20:30:15 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2507752
00:04:41.258 20:30:15 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' Waiting for target to run...
00:04:41.258 20:30:15 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
00:04:41.258 20:30:15 json_config -- json_config/common.sh@25 -- # waitforlisten 2507752 /var/tmp/spdk_tgt.sock
00:04:41.258 20:30:15 json_config -- common/autotest_common.sh@829 -- # '[' -z 2507752 ']'
00:04:41.258 20:30:15 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:04:41.258 20:30:15 json_config -- common/autotest_common.sh@834 -- # local max_retries=100
00:04:41.258 20:30:15 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:04:41.258 20:30:15 json_config -- common/autotest_common.sh@838 -- # xtrace_disable
00:04:41.258 20:30:15 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:41.258 [2024-07-15 20:30:15.697476] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization...
00:04:41.258 [2024-07-15 20:30:15.697522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2507752 ]
00:04:41.258 EAL: No free 2048 kB hugepages reported on node 1
00:04:41.515 [2024-07-15 20:30:15.963388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:41.770 [2024-07-15 20:30:16.029341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:42.026 20:30:16 json_config -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:42.026 20:30:16 json_config -- common/autotest_common.sh@862 -- # return 0
00:04:42.026 20:30:16 json_config -- json_config/common.sh@26 -- # echo ''
00:04:42.026
00:04:42.026 20:30:16 json_config -- json_config/json_config.sh@269 -- # create_accel_config
00:04:42.026 20:30:16 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config
00:04:42.026 20:30:16 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:04:42.026 20:30:16 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:42.026 20:30:16 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]]
00:04:42.282 20:30:16 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config
00:04:42.282 20:30:16 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:04:42.282 20:30:16 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:42.282 20:30:16 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems
00:04:42.282 20:30:16 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config
00:04:42.282 20:30:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
00:04:45.563 20:30:19 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types
00:04:45.563 20:30:19 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types
00:04:45.563 20:30:19 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:04:45.563 20:30:19 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:45.563 20:30:19 json_config -- json_config/json_config.sh@45 -- # local ret=0
00:04:45.563 20:30:19 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister')
00:04:45.563 20:30:19 json_config -- json_config/json_config.sh@46 -- # local enabled_types
00:04:45.563 20:30:19 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types
00:04:45.563 20:30:19 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]'
00:04:45.563 20:30:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
00:04:45.563 20:30:19 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister')
00:04:45.563 20:30:19 json_config -- json_config/json_config.sh@48 -- # local get_types
00:04:45.563 20:30:19 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]]
00:04:45.563 20:30:19 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types
00:04:45.563 20:30:19 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:04:45.563 20:30:19 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:45.563 20:30:19 json_config -- json_config/json_config.sh@55 -- # return 0
00:04:45.563 20:30:19 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]]
00:04:45.563 20:30:19 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]]
00:04:45.563 20:30:19 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]]
00:04:45.563 20:30:19 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]]
00:04:45.563 20:30:19 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config
00:04:45.563 20:30:19 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config
00:04:45.563 20:30:19 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:04:45.563 20:30:19 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:45.563 20:30:19 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1
00:04:45.563 20:30:19 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]]
00:04:45.563 20:30:19 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]]
00:04:45.563 20:30:19 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0
00:04:45.563 20:30:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
00:04:45.563 MallocForNvmf0
00:04:45.563 20:30:20 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
00:04:45.563 20:30:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
00:04:45.821 MallocForNvmf1
00:04:45.821 20:30:20 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0
00:04:45.821 20:30:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
00:04:46.078 [2024-07-15 20:30:20.344479] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:04:46.078 20:30:20 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:04:46.079 20:30:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:04:46.079 20:30:20 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:04:46.079 20:30:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:04:46.358 20:30:20 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:04:46.358 20:30:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:04:46.615 20:30:20 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:04:46.615 20:30:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:04:46.615 [2024-07-15 20:30:21.026635] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:04:46.615 20:30:21 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config
00:04:46.615 20:30:21 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:04:46.615 20:30:21 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:46.615 20:30:21 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target
00:04:46.615 20:30:21 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:04:46.615 20:30:21 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:46.873 20:30:21 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]]
00:04:46.873 20:30:21 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:04:46.873 20:30:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:04:46.873 MallocBdevForConfigChangeCheck
00:04:46.873 20:30:21 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init
00:04:46.873 20:30:21 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:04:46.873 20:30:21 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:46.873 20:30:21 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config
00:04:46.873 20:30:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:04:47.446 20:30:21 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' INFO: shutting down applications...
00:04:47.446 20:30:21 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]]
00:04:47.446 20:30:21 json_config -- json_config/json_config.sh@368 -- # json_config_clear target
00:04:47.446 20:30:21 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]]
00:04:47.446 20:30:21 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:04:48.818 Calling clear_iscsi_subsystem
00:04:48.818 Calling clear_nvmf_subsystem
00:04:48.818 Calling clear_nbd_subsystem
00:04:48.818 Calling clear_ublk_subsystem
00:04:48.818 Calling clear_vhost_blk_subsystem
00:04:48.818 Calling clear_vhost_scsi_subsystem
00:04:48.818 Calling clear_bdev_subsystem
00:04:48.818 20:30:23 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
00:04:48.818 20:30:23 json_config -- json_config/json_config.sh@343 -- # count=100
00:04:48.818 20:30:23 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']'
00:04:48.818 20:30:23 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty
00:04:48.818 20:30:23 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:04:48.818 20:30:23 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:04:49.076 20:30:23 json_config -- json_config/json_config.sh@345 -- # break
00:04:49.076 20:30:23 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']'
00:04:49.076 20:30:23 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target
00:04:49.076 20:30:23 json_config -- json_config/common.sh@31 -- # local app=target
00:04:49.076 20:30:23 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:04:49.076 20:30:23 json_config -- json_config/common.sh@35 -- # [[ -n 2507752 ]]
00:04:49.076 20:30:23 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2507752
00:04:49.076 20:30:23 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:04:49.076 20:30:23 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:04:49.076 20:30:23 json_config -- json_config/common.sh@41 -- # kill -0 2507752
00:04:49.076 20:30:23 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:04:49.643 20:30:24 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:04:49.643 20:30:24 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:04:49.643 20:30:24 json_config -- json_config/common.sh@41 -- # kill -0 2507752
00:04:49.643 20:30:24 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:04:49.643 20:30:24 json_config -- json_config/common.sh@43 -- # break
00:04:49.643 20:30:24 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:04:49.643 20:30:24 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' SPDK target shutdown done
00:04:49.643 20:30:24 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' INFO: relaunching applications...
00:04:49.643 20:30:24 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:49.643 20:30:24 json_config -- json_config/common.sh@9 -- # local app=target
00:04:49.643 20:30:24 json_config -- json_config/common.sh@10 -- # shift
00:04:49.643 20:30:24 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:04:49.643 20:30:24 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:04:49.643 20:30:24 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:04:49.643 20:30:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:49.643 20:30:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:49.643 20:30:24 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2509304
00:04:49.643 20:30:24 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' Waiting for target to run...
00:04:49.643 20:30:24 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:49.643 20:30:24 json_config -- json_config/common.sh@25 -- # waitforlisten 2509304 /var/tmp/spdk_tgt.sock
00:04:49.643 20:30:24 json_config -- common/autotest_common.sh@829 -- # '[' -z 2509304 ']'
00:04:49.643 20:30:24 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:04:49.643 20:30:24 json_config -- common/autotest_common.sh@834 -- # local max_retries=100
00:04:49.643 20:30:24 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:04:49.643 20:30:24 json_config -- common/autotest_common.sh@838 -- # xtrace_disable
00:04:49.643 20:30:24 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:49.643 [2024-07-15 20:30:24.079399] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization...
00:04:49.643 [2024-07-15 20:30:24.079457] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2509304 ]
00:04:49.643 EAL: No free 2048 kB hugepages reported on node 1
00:04:50.210 [2024-07-15 20:30:24.513871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:50.210 [2024-07-15 20:30:24.600121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:53.495 [2024-07-15 20:30:27.614898] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:04:53.495 [2024-07-15 20:30:27.647201] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:04:53.752 20:30:28 json_config -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:53.752 20:30:28 json_config -- common/autotest_common.sh@862 -- # return 0
00:04:53.752 20:30:28 json_config -- json_config/common.sh@26 -- # echo ''
00:04:53.752
00:04:53.752 20:30:28 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]]
00:04:53.752 20:30:28 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' INFO: Checking if target configuration is the same...
00:04:53.752 20:30:28 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:54.010 20:30:28 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config
00:04:54.010 20:30:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:04:54.010 + '[' 2 -ne 2 ']'
00:04:54.010 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:04:54.010 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:04:54.010 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:04:54.010 +++ basename /dev/fd/62
00:04:54.010 ++ mktemp /tmp/62.XXX
00:04:54.010 + tmp_file_1=/tmp/62.UfX
00:04:54.010 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:54.010 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:04:54.010 + tmp_file_2=/tmp/spdk_tgt_config.json.4Ou
00:04:54.010 + ret=0
00:04:54.010 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:04:54.269 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:04:54.269 + diff -u /tmp/62.UfX /tmp/spdk_tgt_config.json.4Ou
00:04:54.269 + echo 'INFO: JSON config files are the same' INFO: JSON config files are the same
00:04:54.269 + rm /tmp/62.UfX /tmp/spdk_tgt_config.json.4Ou
00:04:54.269 + exit 0
00:04:54.269 20:30:28 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]]
00:04:54.269 20:30:28 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' INFO: changing configuration and checking if this can be detected...
00:04:54.269 20:30:28 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:04:54.269 20:30:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:04:54.528 20:30:28 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:54.528 20:30:28 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config
00:04:54.528 20:30:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:04:54.528 + '[' 2 -ne 2 ']'
00:04:54.528 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:04:54.528 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:04:54.528 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:04:54.528 +++ basename /dev/fd/62
00:04:54.528 ++ mktemp /tmp/62.XXX
00:04:54.528 + tmp_file_1=/tmp/62.FzL
00:04:54.528 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:54.528 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:04:54.528 + tmp_file_2=/tmp/spdk_tgt_config.json.NU9
00:04:54.528 + ret=0
00:04:54.528 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:04:54.786 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:04:54.786 + diff -u /tmp/62.FzL /tmp/spdk_tgt_config.json.NU9
00:04:54.786 + ret=1
00:04:54.786 + echo '=== Start of file: /tmp/62.FzL ==='
00:04:54.786 + cat /tmp/62.FzL
00:04:54.786 + echo '=== End of file: /tmp/62.FzL ==='
00:04:54.786 + echo ''
00:04:54.786 + echo '=== Start of file: /tmp/spdk_tgt_config.json.NU9 ==='
00:04:54.786 + cat /tmp/spdk_tgt_config.json.NU9
00:04:54.786 + echo '=== End of file: /tmp/spdk_tgt_config.json.NU9 ==='
00:04:54.786 + echo ''
00:04:54.786 + rm /tmp/62.FzL /tmp/spdk_tgt_config.json.NU9
00:04:54.786 + exit 1
00:04:54.786 20:30:29 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' INFO: configuration change detected.
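The two json_diff.sh runs above boil down to a simple procedure: dump the live configuration over RPC, sort both JSON documents into a canonical form, and diff them. A rough bash sketch of that flow, using the rpc.py and config_filter.py paths seen in this job; the stdin/stdout plumbing and the temp file names here are illustrative, not the exact json_diff.sh source:

	rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
	$rootdir/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > live.json          # live config from spdk_tgt
	$rootdir/test/json_config/config_filter.py -method sort < live.json > a.json       # canonical ordering
	$rootdir/test/json_config/config_filter.py -method sort < $rootdir/spdk_tgt_config.json > b.json
	diff -u a.json b.json && echo 'INFO: JSON config files are the same'

The first comparison passes (exit 0) because nothing changed between saving and reloading the config; this second one fails by design (ret=1) because bdev_malloc_delete just removed MallocBdevForConfigChangeCheck from the live configuration, which is exactly the change the test wants to detect.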
00:04:54.786 20:30:29 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini
00:04:54.786 20:30:29 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini
00:04:54.786 20:30:29 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:04:54.786 20:30:29 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:54.787 20:30:29 json_config -- json_config/json_config.sh@307 -- # local ret=0
00:04:54.787 20:30:29 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]]
00:04:54.787 20:30:29 json_config -- json_config/json_config.sh@317 -- # [[ -n 2509304 ]]
00:04:54.787 20:30:29 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config
00:04:54.787 20:30:29 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config
00:04:54.787 20:30:29 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:04:54.787 20:30:29 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:54.787 20:30:29 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]]
00:04:54.787 20:30:29 json_config -- json_config/json_config.sh@193 -- # uname -s
00:04:54.787 20:30:29 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]]
00:04:54.787 20:30:29 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio
00:04:54.787 20:30:29 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]]
00:04:54.787 20:30:29 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config
00:04:54.787 20:30:29 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:04:54.787 20:30:29 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:54.787 20:30:29 json_config -- json_config/json_config.sh@323 -- # killprocess 2509304
00:04:54.787 20:30:29 json_config -- common/autotest_common.sh@948 -- # '[' -z 2509304 ']'
00:04:54.787 20:30:29 json_config -- common/autotest_common.sh@952 -- # kill -0 2509304
00:04:54.787 20:30:29 json_config -- common/autotest_common.sh@953 -- # uname
00:04:54.787 20:30:29 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:04:54.787 20:30:29 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2509304
00:04:54.787 20:30:29 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:04:54.787 20:30:29 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:04:54.787 20:30:29 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2509304' killing process with pid 2509304 20:30:29 json_config -- common/autotest_common.sh@967 -- # kill 2509304 20:30:29 json_config -- common/autotest_common.sh@972 -- # wait 2509304
00:04:56.694 20:30:30 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:56.694 20:30:30 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini
00:04:56.694 20:30:30 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:04:56.694 20:30:30 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:56.694 20:30:30 json_config -- json_config/json_config.sh@328 -- # return 0
00:04:56.694 20:30:30 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' INFO: Success
00:04:56.694
00:04:56.694 real 0m15.215s
00:04:56.694 user 0m16.017s
00:04:56.694 sys 0m1.812s
00:04:56.694 20:30:30 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:56.694 20:30:30 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:56.694 ************************************
00:04:56.694 END TEST json_config
00:04:56.694 ************************************
00:04:56.694 20:30:30 -- common/autotest_common.sh@1142 -- # return 0
00:04:56.694 20:30:30 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:04:56.694 20:30:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:56.694 20:30:30 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:56.694 20:30:30 -- common/autotest_common.sh@10 -- # set +x
00:04:56.694 ************************************
00:04:56.694 START TEST json_config_extra_key
00:04:56.694 ************************************
00:04:56.694 20:30:30 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:04:56.694 20:30:30 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:04:56.694 20:30:30 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:04:56.694 20:30:30 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:04:56.694 20:30:30 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:04:56.694 20:30:30 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:04:56.694 20:30:30 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:04:56.694 20:30:30 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:04:56.694 20:30:30 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:04:56.694 20:30:30 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:04:56.694 20:30:30 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:04:56.694 20:30:30 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:04:56.694 20:30:30 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:04:56.694 20:30:30 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:04:56.694 20:30:30 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:04:56.694 20:30:30 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:04:56.694 20:30:30 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:04:56.694 20:30:30 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:04:56.694 20:30:30 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:04:56.694 20:30:30 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:04:56.694 20:30:30 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:04:56.694 20:30:30 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:56.694 20:30:30 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:56.694 20:30:30 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:56.694 20:30:30 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:56.694 20:30:30 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:56.694 20:30:30 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:04:56.694 20:30:30 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:56.694 20:30:30 json_config_extra_key -- nvmf/common.sh@47 -- # : 0
00:04:56.694 20:30:30 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:04:56.694 20:30:30 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:04:56.694 20:30:30 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:04:56.694 20:30:30 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:04:56.694 20:30:30 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:04:56.694 20:30:30 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:04:56.694 20:30:30 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:04:56.694 20:30:30 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0
00:04:56.694 20:30:30 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:04:56.694 20:30:30 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:04:56.694 20:30:30 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:04:56.694 20:30:30 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:04:56.694 20:30:30 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:04:56.694 20:30:30 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:04:56.694 20:30:30 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:04:56.694 20:30:30 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')
00:04:56.694 20:30:30 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:04:56.694 20:30:30 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:04:56.694 20:30:30 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' INFO: launching applications...
00:04:56.694 20:30:30 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:04:56.694 20:30:30 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:04:56.694 20:30:30 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:04:56.694 20:30:30 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:04:56.694 20:30:30 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:04:56.694 20:30:30 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:04:56.694 20:30:30 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:56.694 20:30:30 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:56.694 20:30:30 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2510662
00:04:56.694 20:30:30 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' Waiting for target to run...
00:04:56.694 20:30:30 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2510662 /var/tmp/spdk_tgt.sock
00:04:56.694 20:30:30 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 2510662 ']'
00:04:56.694 20:30:30 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:04:56.694 20:30:30 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:04:56.694 20:30:30 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100
00:04:56.694 20:30:30 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:04:56.694 20:30:30 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable
00:04:56.694 20:30:30 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:04:56.694 [2024-07-15 20:30:30.984931] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization...
00:04:56.694 [2024-07-15 20:30:30.984986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2510662 ] 00:04:56.694 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.017 [2024-07-15 20:30:31.418093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.286 [2024-07-15 20:30:31.513645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.545 20:30:31 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:57.545 20:30:31 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:57.545 20:30:31 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:57.545 00:04:57.545 20:30:31 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:57.545 INFO: shutting down applications... 00:04:57.545 20:30:31 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:57.545 20:30:31 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:57.545 20:30:31 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:57.545 20:30:31 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2510662 ]] 00:04:57.545 20:30:31 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2510662 00:04:57.545 20:30:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:57.545 20:30:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:57.545 20:30:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2510662 00:04:57.545 20:30:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:58.113 20:30:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:58.113 20:30:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:58.113 20:30:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2510662 00:04:58.113 20:30:32 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:58.113 20:30:32 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:58.113 20:30:32 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:58.113 20:30:32 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:58.113 SPDK target shutdown done 00:04:58.113 20:30:32 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:58.113 Success 00:04:58.113 00:04:58.113 real 0m1.455s 00:04:58.114 user 0m1.097s 00:04:58.114 sys 0m0.516s 00:04:58.114 20:30:32 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.114 20:30:32 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:58.114 ************************************ 00:04:58.114 END TEST json_config_extra_key 00:04:58.114 ************************************ 00:04:58.114 20:30:32 -- common/autotest_common.sh@1142 -- # return 0 00:04:58.114 20:30:32 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:58.114 20:30:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.114 20:30:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.114 20:30:32 -- 
common/autotest_common.sh@10 -- # set +x 00:04:58.114 ************************************ 00:04:58.114 START TEST alias_rpc 00:04:58.114 ************************************ 00:04:58.114 20:30:32 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:58.114 * Looking for test storage... 00:04:58.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:58.114 20:30:32 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:58.114 20:30:32 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2511016 00:04:58.114 20:30:32 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2511016 00:04:58.114 20:30:32 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 2511016 ']' 00:04:58.114 20:30:32 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.114 20:30:32 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:58.114 20:30:32 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.114 20:30:32 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:58.114 20:30:32 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.114 20:30:32 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.114 [2024-07-15 20:30:32.500006] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:04:58.114 [2024-07-15 20:30:32.500056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2511016 ] 00:04:58.114 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.114 [2024-07-15 20:30:32.554435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.373 [2024-07-15 20:30:32.634592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.940 20:30:33 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.940 20:30:33 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:58.940 20:30:33 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:59.198 20:30:33 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2511016 00:04:59.198 20:30:33 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 2511016 ']' 00:04:59.198 20:30:33 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 2511016 00:04:59.198 20:30:33 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:59.198 20:30:33 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:59.198 20:30:33 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2511016 00:04:59.198 20:30:33 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:59.198 20:30:33 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:59.198 20:30:33 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2511016' 00:04:59.198 killing process with pid 2511016 00:04:59.198 20:30:33 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 2511016 00:04:59.198 20:30:33 alias_rpc -- common/autotest_common.sh@972 -- # wait 2511016 00:04:59.457 00:04:59.457 real 0m1.480s 00:04:59.457 user 0m1.623s 00:04:59.457 sys 0m0.386s 00:04:59.457 20:30:33 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.457 20:30:33 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.457 ************************************ 00:04:59.457 END TEST alias_rpc 00:04:59.457 ************************************ 00:04:59.457 20:30:33 -- common/autotest_common.sh@1142 -- # return 0 00:04:59.457 20:30:33 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:59.457 20:30:33 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:59.457 20:30:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.457 20:30:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.457 20:30:33 -- common/autotest_common.sh@10 -- # set +x 00:04:59.457 ************************************ 00:04:59.457 START TEST spdkcli_tcp 00:04:59.457 ************************************ 00:04:59.457 20:30:33 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:59.717 * Looking for test storage... 00:04:59.717 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:59.717 20:30:33 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:59.717 20:30:33 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:59.717 20:30:33 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:59.717 20:30:33 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:59.717 20:30:33 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:59.717 20:30:33 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:59.717 20:30:33 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:59.717 20:30:33 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:59.717 20:30:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:59.717 20:30:33 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2511310 00:04:59.717 20:30:33 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2511310 00:04:59.717 20:30:33 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:59.717 20:30:33 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 2511310 ']' 00:04:59.717 20:30:33 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.717 20:30:33 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.717 20:30:33 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.717 20:30:33 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.717 20:30:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:59.717 [2024-07-15 20:30:34.045465] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
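spdkcli_tcp differs from the other suites in that it talks to the RPC server over TCP: a socat process bridges port 9998 to the target's UNIX socket, and rpc.py is pointed at the TCP endpoint, as the trace below shows. The whole bridge reduces to two commands (a sketch; -r and -t are the client's retry and timeout knobs as the test passes them):

    # forward TCP 127.0.0.1:9998 to the target's UNIX-domain RPC socket
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill $socat_pid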
00:04:59.717 [2024-07-15 20:30:34.045531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2511310 ] 00:04:59.717 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.717 [2024-07-15 20:30:34.098953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:59.717 [2024-07-15 20:30:34.179898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.717 [2024-07-15 20:30:34.179902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.654 20:30:34 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:00.654 20:30:34 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:00.654 20:30:34 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2511364 00:05:00.654 20:30:34 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:00.654 20:30:34 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:00.654 [ 00:05:00.654 "bdev_malloc_delete", 00:05:00.654 "bdev_malloc_create", 00:05:00.654 "bdev_null_resize", 00:05:00.654 "bdev_null_delete", 00:05:00.654 "bdev_null_create", 00:05:00.654 "bdev_nvme_cuse_unregister", 00:05:00.654 "bdev_nvme_cuse_register", 00:05:00.654 "bdev_opal_new_user", 00:05:00.654 "bdev_opal_set_lock_state", 00:05:00.654 "bdev_opal_delete", 00:05:00.654 "bdev_opal_get_info", 00:05:00.654 "bdev_opal_create", 00:05:00.654 "bdev_nvme_opal_revert", 00:05:00.654 "bdev_nvme_opal_init", 00:05:00.654 "bdev_nvme_send_cmd", 00:05:00.654 "bdev_nvme_get_path_iostat", 00:05:00.654 "bdev_nvme_get_mdns_discovery_info", 00:05:00.654 "bdev_nvme_stop_mdns_discovery", 00:05:00.654 "bdev_nvme_start_mdns_discovery", 00:05:00.654 "bdev_nvme_set_multipath_policy", 00:05:00.654 "bdev_nvme_set_preferred_path", 00:05:00.654 "bdev_nvme_get_io_paths", 00:05:00.654 "bdev_nvme_remove_error_injection", 00:05:00.654 "bdev_nvme_add_error_injection", 00:05:00.655 "bdev_nvme_get_discovery_info", 00:05:00.655 "bdev_nvme_stop_discovery", 00:05:00.655 "bdev_nvme_start_discovery", 00:05:00.655 "bdev_nvme_get_controller_health_info", 00:05:00.655 "bdev_nvme_disable_controller", 00:05:00.655 "bdev_nvme_enable_controller", 00:05:00.655 "bdev_nvme_reset_controller", 00:05:00.655 "bdev_nvme_get_transport_statistics", 00:05:00.655 "bdev_nvme_apply_firmware", 00:05:00.655 "bdev_nvme_detach_controller", 00:05:00.655 "bdev_nvme_get_controllers", 00:05:00.655 "bdev_nvme_attach_controller", 00:05:00.655 "bdev_nvme_set_hotplug", 00:05:00.655 "bdev_nvme_set_options", 00:05:00.655 "bdev_passthru_delete", 00:05:00.655 "bdev_passthru_create", 00:05:00.655 "bdev_lvol_set_parent_bdev", 00:05:00.655 "bdev_lvol_set_parent", 00:05:00.655 "bdev_lvol_check_shallow_copy", 00:05:00.655 "bdev_lvol_start_shallow_copy", 00:05:00.655 "bdev_lvol_grow_lvstore", 00:05:00.655 "bdev_lvol_get_lvols", 00:05:00.655 "bdev_lvol_get_lvstores", 00:05:00.655 "bdev_lvol_delete", 00:05:00.655 "bdev_lvol_set_read_only", 00:05:00.655 "bdev_lvol_resize", 00:05:00.655 "bdev_lvol_decouple_parent", 00:05:00.655 "bdev_lvol_inflate", 00:05:00.655 "bdev_lvol_rename", 00:05:00.655 "bdev_lvol_clone_bdev", 00:05:00.655 "bdev_lvol_clone", 00:05:00.655 "bdev_lvol_snapshot", 00:05:00.655 "bdev_lvol_create", 00:05:00.655 "bdev_lvol_delete_lvstore", 00:05:00.655 
"bdev_lvol_rename_lvstore", 00:05:00.655 "bdev_lvol_create_lvstore", 00:05:00.655 "bdev_raid_set_options", 00:05:00.655 "bdev_raid_remove_base_bdev", 00:05:00.655 "bdev_raid_add_base_bdev", 00:05:00.655 "bdev_raid_delete", 00:05:00.655 "bdev_raid_create", 00:05:00.655 "bdev_raid_get_bdevs", 00:05:00.655 "bdev_error_inject_error", 00:05:00.655 "bdev_error_delete", 00:05:00.655 "bdev_error_create", 00:05:00.655 "bdev_split_delete", 00:05:00.655 "bdev_split_create", 00:05:00.655 "bdev_delay_delete", 00:05:00.655 "bdev_delay_create", 00:05:00.655 "bdev_delay_update_latency", 00:05:00.655 "bdev_zone_block_delete", 00:05:00.655 "bdev_zone_block_create", 00:05:00.655 "blobfs_create", 00:05:00.655 "blobfs_detect", 00:05:00.655 "blobfs_set_cache_size", 00:05:00.655 "bdev_aio_delete", 00:05:00.655 "bdev_aio_rescan", 00:05:00.655 "bdev_aio_create", 00:05:00.655 "bdev_ftl_set_property", 00:05:00.655 "bdev_ftl_get_properties", 00:05:00.655 "bdev_ftl_get_stats", 00:05:00.655 "bdev_ftl_unmap", 00:05:00.655 "bdev_ftl_unload", 00:05:00.655 "bdev_ftl_delete", 00:05:00.655 "bdev_ftl_load", 00:05:00.655 "bdev_ftl_create", 00:05:00.655 "bdev_virtio_attach_controller", 00:05:00.655 "bdev_virtio_scsi_get_devices", 00:05:00.655 "bdev_virtio_detach_controller", 00:05:00.655 "bdev_virtio_blk_set_hotplug", 00:05:00.655 "bdev_iscsi_delete", 00:05:00.655 "bdev_iscsi_create", 00:05:00.655 "bdev_iscsi_set_options", 00:05:00.655 "accel_error_inject_error", 00:05:00.655 "ioat_scan_accel_module", 00:05:00.655 "dsa_scan_accel_module", 00:05:00.655 "iaa_scan_accel_module", 00:05:00.655 "vfu_virtio_create_scsi_endpoint", 00:05:00.655 "vfu_virtio_scsi_remove_target", 00:05:00.655 "vfu_virtio_scsi_add_target", 00:05:00.655 "vfu_virtio_create_blk_endpoint", 00:05:00.655 "vfu_virtio_delete_endpoint", 00:05:00.655 "keyring_file_remove_key", 00:05:00.655 "keyring_file_add_key", 00:05:00.655 "keyring_linux_set_options", 00:05:00.655 "iscsi_get_histogram", 00:05:00.655 "iscsi_enable_histogram", 00:05:00.655 "iscsi_set_options", 00:05:00.655 "iscsi_get_auth_groups", 00:05:00.655 "iscsi_auth_group_remove_secret", 00:05:00.655 "iscsi_auth_group_add_secret", 00:05:00.655 "iscsi_delete_auth_group", 00:05:00.655 "iscsi_create_auth_group", 00:05:00.655 "iscsi_set_discovery_auth", 00:05:00.655 "iscsi_get_options", 00:05:00.655 "iscsi_target_node_request_logout", 00:05:00.655 "iscsi_target_node_set_redirect", 00:05:00.655 "iscsi_target_node_set_auth", 00:05:00.655 "iscsi_target_node_add_lun", 00:05:00.655 "iscsi_get_stats", 00:05:00.655 "iscsi_get_connections", 00:05:00.655 "iscsi_portal_group_set_auth", 00:05:00.655 "iscsi_start_portal_group", 00:05:00.655 "iscsi_delete_portal_group", 00:05:00.655 "iscsi_create_portal_group", 00:05:00.655 "iscsi_get_portal_groups", 00:05:00.655 "iscsi_delete_target_node", 00:05:00.655 "iscsi_target_node_remove_pg_ig_maps", 00:05:00.655 "iscsi_target_node_add_pg_ig_maps", 00:05:00.655 "iscsi_create_target_node", 00:05:00.655 "iscsi_get_target_nodes", 00:05:00.655 "iscsi_delete_initiator_group", 00:05:00.655 "iscsi_initiator_group_remove_initiators", 00:05:00.655 "iscsi_initiator_group_add_initiators", 00:05:00.655 "iscsi_create_initiator_group", 00:05:00.655 "iscsi_get_initiator_groups", 00:05:00.655 "nvmf_set_crdt", 00:05:00.655 "nvmf_set_config", 00:05:00.655 "nvmf_set_max_subsystems", 00:05:00.655 "nvmf_stop_mdns_prr", 00:05:00.655 "nvmf_publish_mdns_prr", 00:05:00.655 "nvmf_subsystem_get_listeners", 00:05:00.655 "nvmf_subsystem_get_qpairs", 00:05:00.655 "nvmf_subsystem_get_controllers", 00:05:00.655 
"nvmf_get_stats", 00:05:00.655 "nvmf_get_transports", 00:05:00.655 "nvmf_create_transport", 00:05:00.655 "nvmf_get_targets", 00:05:00.655 "nvmf_delete_target", 00:05:00.655 "nvmf_create_target", 00:05:00.655 "nvmf_subsystem_allow_any_host", 00:05:00.655 "nvmf_subsystem_remove_host", 00:05:00.655 "nvmf_subsystem_add_host", 00:05:00.655 "nvmf_ns_remove_host", 00:05:00.655 "nvmf_ns_add_host", 00:05:00.655 "nvmf_subsystem_remove_ns", 00:05:00.655 "nvmf_subsystem_add_ns", 00:05:00.655 "nvmf_subsystem_listener_set_ana_state", 00:05:00.655 "nvmf_discovery_get_referrals", 00:05:00.655 "nvmf_discovery_remove_referral", 00:05:00.655 "nvmf_discovery_add_referral", 00:05:00.655 "nvmf_subsystem_remove_listener", 00:05:00.655 "nvmf_subsystem_add_listener", 00:05:00.655 "nvmf_delete_subsystem", 00:05:00.655 "nvmf_create_subsystem", 00:05:00.655 "nvmf_get_subsystems", 00:05:00.655 "env_dpdk_get_mem_stats", 00:05:00.655 "nbd_get_disks", 00:05:00.655 "nbd_stop_disk", 00:05:00.655 "nbd_start_disk", 00:05:00.655 "ublk_recover_disk", 00:05:00.655 "ublk_get_disks", 00:05:00.655 "ublk_stop_disk", 00:05:00.655 "ublk_start_disk", 00:05:00.655 "ublk_destroy_target", 00:05:00.655 "ublk_create_target", 00:05:00.655 "virtio_blk_create_transport", 00:05:00.655 "virtio_blk_get_transports", 00:05:00.655 "vhost_controller_set_coalescing", 00:05:00.655 "vhost_get_controllers", 00:05:00.655 "vhost_delete_controller", 00:05:00.655 "vhost_create_blk_controller", 00:05:00.655 "vhost_scsi_controller_remove_target", 00:05:00.655 "vhost_scsi_controller_add_target", 00:05:00.655 "vhost_start_scsi_controller", 00:05:00.655 "vhost_create_scsi_controller", 00:05:00.655 "thread_set_cpumask", 00:05:00.655 "framework_get_governor", 00:05:00.655 "framework_get_scheduler", 00:05:00.655 "framework_set_scheduler", 00:05:00.655 "framework_get_reactors", 00:05:00.655 "thread_get_io_channels", 00:05:00.655 "thread_get_pollers", 00:05:00.655 "thread_get_stats", 00:05:00.655 "framework_monitor_context_switch", 00:05:00.655 "spdk_kill_instance", 00:05:00.655 "log_enable_timestamps", 00:05:00.655 "log_get_flags", 00:05:00.655 "log_clear_flag", 00:05:00.655 "log_set_flag", 00:05:00.655 "log_get_level", 00:05:00.655 "log_set_level", 00:05:00.655 "log_get_print_level", 00:05:00.655 "log_set_print_level", 00:05:00.655 "framework_enable_cpumask_locks", 00:05:00.655 "framework_disable_cpumask_locks", 00:05:00.655 "framework_wait_init", 00:05:00.655 "framework_start_init", 00:05:00.655 "scsi_get_devices", 00:05:00.655 "bdev_get_histogram", 00:05:00.655 "bdev_enable_histogram", 00:05:00.655 "bdev_set_qos_limit", 00:05:00.655 "bdev_set_qd_sampling_period", 00:05:00.655 "bdev_get_bdevs", 00:05:00.655 "bdev_reset_iostat", 00:05:00.655 "bdev_get_iostat", 00:05:00.655 "bdev_examine", 00:05:00.655 "bdev_wait_for_examine", 00:05:00.655 "bdev_set_options", 00:05:00.655 "notify_get_notifications", 00:05:00.655 "notify_get_types", 00:05:00.655 "accel_get_stats", 00:05:00.655 "accel_set_options", 00:05:00.655 "accel_set_driver", 00:05:00.655 "accel_crypto_key_destroy", 00:05:00.655 "accel_crypto_keys_get", 00:05:00.655 "accel_crypto_key_create", 00:05:00.655 "accel_assign_opc", 00:05:00.655 "accel_get_module_info", 00:05:00.655 "accel_get_opc_assignments", 00:05:00.655 "vmd_rescan", 00:05:00.655 "vmd_remove_device", 00:05:00.655 "vmd_enable", 00:05:00.655 "sock_get_default_impl", 00:05:00.655 "sock_set_default_impl", 00:05:00.655 "sock_impl_set_options", 00:05:00.655 "sock_impl_get_options", 00:05:00.655 "iobuf_get_stats", 00:05:00.655 "iobuf_set_options", 
00:05:00.655 "keyring_get_keys", 00:05:00.655 "framework_get_pci_devices", 00:05:00.655 "framework_get_config", 00:05:00.655 "framework_get_subsystems", 00:05:00.655 "vfu_tgt_set_base_path", 00:05:00.655 "trace_get_info", 00:05:00.655 "trace_get_tpoint_group_mask", 00:05:00.655 "trace_disable_tpoint_group", 00:05:00.655 "trace_enable_tpoint_group", 00:05:00.655 "trace_clear_tpoint_mask", 00:05:00.655 "trace_set_tpoint_mask", 00:05:00.655 "spdk_get_version", 00:05:00.655 "rpc_get_methods" 00:05:00.655 ] 00:05:00.655 20:30:35 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:00.655 20:30:35 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:00.655 20:30:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:00.655 20:30:35 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:00.655 20:30:35 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2511310 00:05:00.655 20:30:35 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 2511310 ']' 00:05:00.655 20:30:35 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 2511310 00:05:00.655 20:30:35 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:00.655 20:30:35 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:00.655 20:30:35 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2511310 00:05:00.656 20:30:35 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:00.656 20:30:35 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:00.656 20:30:35 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2511310' 00:05:00.656 killing process with pid 2511310 00:05:00.656 20:30:35 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 2511310 00:05:00.656 20:30:35 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 2511310 00:05:01.223 00:05:01.223 real 0m1.529s 00:05:01.223 user 0m2.887s 00:05:01.223 sys 0m0.429s 00:05:01.223 20:30:35 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.223 20:30:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:01.223 ************************************ 00:05:01.223 END TEST spdkcli_tcp 00:05:01.223 ************************************ 00:05:01.223 20:30:35 -- common/autotest_common.sh@1142 -- # return 0 00:05:01.223 20:30:35 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:01.223 20:30:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.223 20:30:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.223 20:30:35 -- common/autotest_common.sh@10 -- # set +x 00:05:01.223 ************************************ 00:05:01.223 START TEST dpdk_mem_utility 00:05:01.223 ************************************ 00:05:01.223 20:30:35 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:01.223 * Looking for test storage... 
00:05:01.223 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:01.223 20:30:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:01.223 20:30:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2511609 00:05:01.223 20:30:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:01.223 20:30:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2511609 00:05:01.223 20:30:35 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 2511609 ']' 00:05:01.223 20:30:35 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.223 20:30:35 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:01.224 20:30:35 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.224 20:30:35 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:01.224 20:30:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:01.224 [2024-07-15 20:30:35.627916] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:05:01.224 [2024-07-15 20:30:35.627964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2511609 ] 00:05:01.224 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.224 [2024-07-15 20:30:35.679701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.482 [2024-07-15 20:30:35.754922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.047 20:30:36 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:02.047 20:30:36 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:02.047 20:30:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:02.047 20:30:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:02.047 20:30:36 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.047 20:30:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:02.047 { 00:05:02.047 "filename": "/tmp/spdk_mem_dump.txt" 00:05:02.047 } 00:05:02.047 20:30:36 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.047 20:30:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:02.047 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:02.047 1 heaps totaling size 814.000000 MiB 00:05:02.047 size: 814.000000 MiB heap id: 0 00:05:02.047 end heaps---------- 00:05:02.047 8 mempools totaling size 598.116089 MiB 00:05:02.047 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:02.047 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:02.047 size: 84.521057 MiB name: bdev_io_2511609 00:05:02.047 size: 51.011292 MiB name: evtpool_2511609 00:05:02.047 
size: 50.003479 MiB name: msgpool_2511609 00:05:02.047 size: 21.763794 MiB name: PDU_Pool 00:05:02.047 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:02.047 size: 0.026123 MiB name: Session_Pool 00:05:02.047 end mempools------- 00:05:02.047 6 memzones totaling size 4.142822 MiB 00:05:02.047 size: 1.000366 MiB name: RG_ring_0_2511609 00:05:02.047 size: 1.000366 MiB name: RG_ring_1_2511609 00:05:02.047 size: 1.000366 MiB name: RG_ring_4_2511609 00:05:02.047 size: 1.000366 MiB name: RG_ring_5_2511609 00:05:02.047 size: 0.125366 MiB name: RG_ring_2_2511609 00:05:02.047 size: 0.015991 MiB name: RG_ring_3_2511609 00:05:02.047 end memzones------- 00:05:02.047 20:30:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:02.306 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:02.306 list of free elements. size: 12.519348 MiB 00:05:02.306 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:02.306 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:02.307 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:02.307 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:02.307 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:02.307 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:02.307 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:02.307 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:02.307 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:02.307 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:02.307 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:02.307 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:02.307 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:02.307 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:02.307 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:02.307 list of standard malloc elements. 
size: 199.218079 MiB 00:05:02.307 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:02.307 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:02.307 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:02.307 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:02.307 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:02.307 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:02.307 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:02.307 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:02.307 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:02.307 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:02.307 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:02.307 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:02.307 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:02.307 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:02.307 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:02.307 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:02.307 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:02.307 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:02.307 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:02.307 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:02.307 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:02.307 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:02.307 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:02.307 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:02.307 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:02.307 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:02.307 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:02.307 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:02.307 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:02.307 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:02.307 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:02.307 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:02.307 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:02.307 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:02.307 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:02.307 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:02.307 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:02.307 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:02.307 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:02.307 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:02.307 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:02.307 list of memzone associated elements. 
size: 602.262573 MiB 00:05:02.307 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:02.307 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:02.307 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:02.307 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:02.307 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:02.307 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2511609_0 00:05:02.307 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:02.307 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2511609_0 00:05:02.307 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:02.307 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2511609_0 00:05:02.307 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:02.307 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:02.307 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:02.307 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:02.307 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:02.307 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2511609 00:05:02.307 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:02.307 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2511609 00:05:02.307 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:02.307 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2511609 00:05:02.307 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:02.307 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:02.307 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:02.307 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:02.307 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:02.307 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:02.307 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:02.307 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:02.307 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:02.307 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2511609 00:05:02.307 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:02.307 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2511609 00:05:02.307 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:02.307 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2511609 00:05:02.307 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:02.307 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2511609 00:05:02.307 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:02.307 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2511609 00:05:02.307 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:02.307 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:02.307 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:02.307 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:02.307 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:02.307 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:02.307 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:02.307 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2511609 00:05:02.307 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:02.307 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:02.307 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:02.307 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:02.307 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:02.307 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2511609 00:05:02.307 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:02.307 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:02.307 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:02.307 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2511609 00:05:02.307 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:02.307 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2511609 00:05:02.307 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:02.307 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:02.307 20:30:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:02.307 20:30:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2511609 00:05:02.307 20:30:36 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 2511609 ']' 00:05:02.307 20:30:36 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 2511609 00:05:02.307 20:30:36 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:02.307 20:30:36 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:02.307 20:30:36 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2511609 00:05:02.307 20:30:36 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:02.307 20:30:36 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:02.307 20:30:36 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2511609' 00:05:02.307 killing process with pid 2511609 00:05:02.307 20:30:36 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 2511609 00:05:02.307 20:30:36 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 2511609 00:05:02.567 00:05:02.567 real 0m1.379s 00:05:02.567 user 0m1.446s 00:05:02.567 sys 0m0.383s 00:05:02.567 20:30:36 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.567 20:30:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:02.567 ************************************ 00:05:02.567 END TEST dpdk_mem_utility 00:05:02.567 ************************************ 00:05:02.567 20:30:36 -- common/autotest_common.sh@1142 -- # return 0 00:05:02.567 20:30:36 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:02.567 20:30:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.567 20:30:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.567 20:30:36 -- common/autotest_common.sh@10 -- # set +x 00:05:02.567 ************************************ 00:05:02.567 START TEST event 00:05:02.567 ************************************ 00:05:02.567 20:30:36 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:02.567 * Looking for test storage... 
00:05:02.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:02.567 20:30:37 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:02.567 20:30:37 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:02.567 20:30:37 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:02.567 20:30:37 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:02.567 20:30:37 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.567 20:30:37 event -- common/autotest_common.sh@10 -- # set +x 00:05:02.826 ************************************ 00:05:02.826 START TEST event_perf 00:05:02.826 ************************************ 00:05:02.826 20:30:37 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:02.826 Running I/O for 1 seconds...[2024-07-15 20:30:37.080679] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:05:02.826 [2024-07-15 20:30:37.080745] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2511902 ] 00:05:02.826 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.826 [2024-07-15 20:30:37.139046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:02.826 [2024-07-15 20:30:37.215335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.826 [2024-07-15 20:30:37.215430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:02.826 [2024-07-15 20:30:37.215523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:02.826 [2024-07-15 20:30:37.215525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.202 Running I/O for 1 seconds... 00:05:04.202 lcore 0: 205784 00:05:04.202 lcore 1: 205783 00:05:04.202 lcore 2: 205782 00:05:04.202 lcore 3: 205784 00:05:04.202 done. 00:05:04.202 00:05:04.202 real 0m1.226s 00:05:04.202 user 0m4.152s 00:05:04.202 sys 0m0.070s 00:05:04.202 20:30:38 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.202 20:30:38 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:04.203 ************************************ 00:05:04.203 END TEST event_perf 00:05:04.203 ************************************ 00:05:04.203 20:30:38 event -- common/autotest_common.sh@1142 -- # return 0 00:05:04.203 20:30:38 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:04.203 20:30:38 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:04.203 20:30:38 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.203 20:30:38 event -- common/autotest_common.sh@10 -- # set +x 00:05:04.203 ************************************ 00:05:04.203 START TEST event_reactor 00:05:04.203 ************************************ 00:05:04.203 20:30:38 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:04.203 [2024-07-15 20:30:38.376077] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
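For context on the event_perf output above: -m 0xF runs one reactor on each of four cores for -t 1 second, and each lcore prints how many events it processed, about 205,780 apiece here, roughly 823,000 events in total; the near-identical per-core counts suggest the event dispatch load was spread evenly. The benchmark is a single invocation:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1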
00:05:04.203 [2024-07-15 20:30:38.376144] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2512154 ] 00:05:04.203 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.203 [2024-07-15 20:30:38.434180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.203 [2024-07-15 20:30:38.505701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.138 test_start 00:05:05.138 oneshot 00:05:05.138 tick 100 00:05:05.138 tick 100 00:05:05.138 tick 250 00:05:05.138 tick 100 00:05:05.138 tick 100 00:05:05.138 tick 250 00:05:05.138 tick 100 00:05:05.138 tick 500 00:05:05.138 tick 100 00:05:05.138 tick 100 00:05:05.138 tick 250 00:05:05.138 tick 100 00:05:05.138 tick 100 00:05:05.138 test_end 00:05:05.138 00:05:05.138 real 0m1.219s 00:05:05.138 user 0m1.143s 00:05:05.138 sys 0m0.072s 00:05:05.138 20:30:39 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.138 20:30:39 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:05.138 ************************************ 00:05:05.138 END TEST event_reactor 00:05:05.138 ************************************ 00:05:05.138 20:30:39 event -- common/autotest_common.sh@1142 -- # return 0 00:05:05.138 20:30:39 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:05.138 20:30:39 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:05.138 20:30:39 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.138 20:30:39 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.397 ************************************ 00:05:05.397 START TEST event_reactor_perf 00:05:05.397 ************************************ 00:05:05.397 20:30:39 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:05.397 [2024-07-15 20:30:39.652214] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:05:05.397 [2024-07-15 20:30:39.652282] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2512414 ] 00:05:05.397 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.397 [2024-07-15 20:30:39.707630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.397 [2024-07-15 20:30:39.774935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.771 test_start 00:05:06.771 test_end 00:05:06.771 Performance: 506971 events per second 00:05:06.771 00:05:06.771 real 0m1.209s 00:05:06.771 user 0m1.141s 00:05:06.771 sys 0m0.063s 00:05:06.771 20:30:40 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.771 20:30:40 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:06.771 ************************************ 00:05:06.771 END TEST event_reactor_perf 00:05:06.771 ************************************ 00:05:06.771 20:30:40 event -- common/autotest_common.sh@1142 -- # return 0 00:05:06.771 20:30:40 event -- event/event.sh@49 -- # uname -s 00:05:06.771 20:30:40 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:06.771 20:30:40 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:06.771 20:30:40 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.771 20:30:40 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.771 20:30:40 event -- common/autotest_common.sh@10 -- # set +x 00:05:06.771 ************************************ 00:05:06.771 START TEST event_scheduler 00:05:06.771 ************************************ 00:05:06.771 20:30:40 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:06.771 * Looking for test storage... 00:05:06.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:06.771 20:30:41 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:06.771 20:30:41 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2512694 00:05:06.771 20:30:41 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:06.771 20:30:41 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:06.771 20:30:41 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2512694 00:05:06.771 20:30:41 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 2512694 ']' 00:05:06.771 20:30:41 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.771 20:30:41 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:06.771 20:30:41 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
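Note how the scheduler app above came up: --wait-for-rpc makes it park after EAL initialization instead of starting the framework, so configuration arrives over the default RPC socket, which is exactly what the rpc_cmd calls just below do. Reduced to plain commands (a sketch reusing this run's flags; -p selects the main lcore):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    $SPDK/scripts/rpc.py framework_set_scheduler dynamic   # pick the policy while the app is parked
    $SPDK/scripts/rpc.py framework_start_init              # releases --wait-for-rpc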
00:05:06.771 20:30:41 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:06.771 20:30:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:06.771 [2024-07-15 20:30:41.051821] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:05:06.771 [2024-07-15 20:30:41.051861] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2512694 ] 00:05:06.771 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.771 [2024-07-15 20:30:41.106082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:06.771 [2024-07-15 20:30:41.180938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.771 [2024-07-15 20:30:41.181025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.771 [2024-07-15 20:30:41.181110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:06.771 [2024-07-15 20:30:41.181111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:07.705 20:30:41 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:07.705 20:30:41 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:07.705 20:30:41 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:07.705 20:30:41 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.705 20:30:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:07.705 [2024-07-15 20:30:41.875542] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:07.705 [2024-07-15 20:30:41.875562] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:07.705 [2024-07-15 20:30:41.875571] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:07.705 [2024-07-15 20:30:41.875577] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:07.705 [2024-07-15 20:30:41.875582] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:07.705 20:30:41 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.705 20:30:41 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:07.705 20:30:41 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.705 20:30:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:07.705 [2024-07-15 20:30:41.948536] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
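scheduler_create_thread, which runs next, populates the app with SPDK threads through an rpc.py plugin rather than built-in methods: scheduler_thread_create takes a thread name, a cpumask, and an active percentage (how busy the thread simulates being), and the test later retunes one thread with scheduler_thread_set_active and removes another with scheduler_thread_delete. Stripped of the harness, the calls look like this (a sketch; it assumes rpc.py can import scheduler_plugin, e.g. via PYTHONPATH pointing at the test directory):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    export PYTHONPATH=$SPDK/test/event/scheduler   # assumed plugin location
    RPC="$SPDK/scripts/rpc.py --plugin scheduler_plugin"
    $RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100  # pinned to lcore 0, fully busy
    $RPC scheduler_thread_create -n idle_pinned -m 0x1 -a 0      # same core, idle
    $RPC scheduler_thread_set_active 11 50   # thread id 11 -> 50% busy
    $RPC scheduler_thread_delete 12          # drop thread id 12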
00:05:07.705 20:30:41 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.705 20:30:41 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:07.705 20:30:41 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.705 20:30:41 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.705 20:30:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:07.705 ************************************ 00:05:07.705 START TEST scheduler_create_thread 00:05:07.705 ************************************ 00:05:07.705 20:30:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:07.705 20:30:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:07.705 20:30:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.705 20:30:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.705 2 00:05:07.705 20:30:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.705 20:30:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:07.705 20:30:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.705 20:30:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.705 3 00:05:07.705 20:30:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.705 20:30:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:07.705 20:30:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.705 20:30:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.705 4 00:05:07.705 20:30:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.705 20:30:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:07.705 20:30:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.705 20:30:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.705 5 00:05:07.705 20:30:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.705 20:30:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:07.705 20:30:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.705 20:30:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.705 6 00:05:07.705 20:30:42 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.705 20:30:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:07.706 20:30:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.706 20:30:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.706 7 00:05:07.706 20:30:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.706 20:30:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:07.706 20:30:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.706 20:30:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.706 8 00:05:07.706 20:30:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.706 20:30:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:07.706 20:30:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.706 20:30:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.706 9 00:05:07.706 20:30:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.706 20:30:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:07.706 20:30:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.706 20:30:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.706 10 00:05:07.706 20:30:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.706 20:30:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:07.706 20:30:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.706 20:30:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.706 20:30:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.706 20:30:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:07.706 20:30:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:07.706 20:30:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.706 20:30:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.706 20:30:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.706 20:30:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:07.706 20:30:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.706 20:30:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.077 20:30:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.077 20:30:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:09.077 20:30:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:09.077 20:30:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.077 20:30:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.452 20:30:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.452 00:05:10.452 real 0m2.619s 00:05:10.452 user 0m0.016s 00:05:10.452 sys 0m0.003s 00:05:10.452 20:30:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.452 20:30:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.452 ************************************ 00:05:10.452 END TEST scheduler_create_thread 00:05:10.452 ************************************ 00:05:10.452 20:30:44 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:10.452 20:30:44 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:10.452 20:30:44 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2512694 00:05:10.452 20:30:44 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 2512694 ']' 00:05:10.452 20:30:44 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 2512694 00:05:10.452 20:30:44 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:10.452 20:30:44 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:10.452 20:30:44 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2512694 00:05:10.452 20:30:44 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:10.452 20:30:44 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:10.452 20:30:44 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2512694' 00:05:10.452 killing process with pid 2512694 00:05:10.452 20:30:44 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 2512694 00:05:10.452 20:30:44 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 2512694 00:05:10.711 [2024-07-15 20:30:45.078643] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
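The scheduler run also logged its tuning defaults when the dynamic policy was installed, in the set_opts notices earlier in the trace: load limit 20, core limit 80, core busy 95, which appear to be the thresholds consulted when rebalancing threads across cores (the authoritative semantics are in scheduler_dynamic.c). The active policy can be inspected at any time; both methods appear in the rpc_get_methods listing earlier in this log:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/scripts/rpc.py framework_get_scheduler   # active scheduler and its options
    $SPDK/scripts/rpc.py framework_get_governor    # active governor, if one is set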
00:05:10.970
00:05:10.970 real 0m4.348s
00:05:10.970 user 0m8.256s
00:05:10.970 sys 0m0.358s
00:05:10.970 20:30:45 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:10.970 20:30:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:10.970 ************************************
00:05:10.970 END TEST event_scheduler
00:05:10.970 ************************************
00:05:10.970 20:30:45 event -- common/autotest_common.sh@1142 -- # return 0
00:05:10.970 20:30:45 event -- event/event.sh@51 -- # modprobe -n nbd
00:05:10.970 20:30:45 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:05:10.970 20:30:45 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:10.970 20:30:45 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:10.970 20:30:45 event -- common/autotest_common.sh@10 -- # set +x
00:05:10.970 ************************************
00:05:10.970 START TEST app_repeat
00:05:10.970 ************************************
00:05:10.970 20:30:45 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test
00:05:10.970 20:30:45 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:10.970 20:30:45 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:10.970 20:30:45 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:05:10.970 20:30:45 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:10.970 20:30:45 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:05:10.970 20:30:45 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:05:10.970 20:30:45 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:05:10.970 20:30:45 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2513439
00:05:10.970 20:30:45 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:05:10.970 20:30:45 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:05:10.970 20:30:45 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2513439' Process app_repeat pid: 2513439
00:05:10.970 20:30:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:10.970 20:30:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' spdk_app_start Round 0
00:05:10.970 20:30:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2513439 /var/tmp/spdk-nbd.sock
00:05:10.970 20:30:45 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2513439 ']'
00:05:10.970 20:30:45 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:10.970 20:30:45 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:10.970 20:30:45 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:10.970 20:30:45 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:10.970 20:30:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
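The app_repeat launch traced above follows the usual autotest pattern: start the binary with its RPC socket, core mask, and per-round timeout, then block until the socket answers. A hedged sketch of that pattern (the real waitforlisten in autotest_common.sh adds retry caps and pid checks that this polling loop omits; killprocess is the autotest helper, sketched further below):

    sock=/var/tmp/spdk-nbd.sock
    ./test/event/app_repeat/app_repeat -r "$sock" -m 0x3 -t 4 &
    repeat_pid=$!
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
    # Stand-in for waitforlisten: poll until the RPC socket accepts a request.
    until ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done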
00:05:10.970 [2024-07-15 20:30:45.378359] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization...
00:05:10.970 [2024-07-15 20:30:45.378409] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2513439 ]
00:05:10.970 EAL: No free 2048 kB hugepages reported on node 1
00:05:11.230 [2024-07-15 20:30:45.432548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:11.230 [2024-07-15 20:30:45.513482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:11.230 [2024-07-15 20:30:45.513486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:11.230 20:30:45 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:11.230 20:30:45 event.app_repeat -- common/autotest_common.sh@862 -- # return 0
00:05:11.230 20:30:45 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:11.489 Malloc0
00:05:11.489 20:30:45 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:11.748 Malloc1
00:05:11.748 20:30:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:11.748 20:30:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:11.748 20:30:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:11.748 20:30:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:11.748 20:30:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:11.748 20:30:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:11.748 20:30:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:11.748 20:30:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:11.748 20:30:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:11.748 20:30:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:11.748 20:30:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:11.748 20:30:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:11.748 20:30:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:11.748 20:30:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:11.748 20:30:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:11.748 20:30:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:11.748 /dev/nbd0
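Each round rebuilds the data path from scratch: two 64 MiB malloc bdevs with a 4 KiB block size, each then exported as a kernel nbd device. The two RPCs involved, sketched against the same socket (paths shortened from the trace):

    sock=/var/tmp/spdk-nbd.sock
    ./scripts/rpc.py -s "$sock" bdev_malloc_create 64 4096          # prints the new bdev name, e.g. Malloc0
    ./scripts/rpc.py -s "$sock" nbd_start_disk Malloc0 /dev/nbd0    # prints the nbd device path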
00:05:11.748 20:30:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:11.748 20:30:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:11.748 20:30:46 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:05:11.748 20:30:46 event.app_repeat -- common/autotest_common.sh@867 -- # local i
00:05:11.748 20:30:46 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:05:11.748 20:30:46 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:05:11.748 20:30:46 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:05:11.748 20:30:46 event.app_repeat -- common/autotest_common.sh@871 -- # break
00:05:11.748 20:30:46 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:05:11.748 20:30:46 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:05:11.748 20:30:46 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:11.748 1+0 records in
00:05:11.748 1+0 records out
00:05:11.748 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233985 s, 17.5 MB/s
00:05:11.748 20:30:46 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:11.748 20:30:46 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096
00:05:11.748 20:30:46 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:11.748 20:30:46 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:05:11.748 20:30:46 event.app_repeat -- common/autotest_common.sh@887 -- # return 0
00:05:11.748 20:30:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:11.748 20:30:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:11.748 20:30:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:12.007 /dev/nbd1
00:05:12.007 20:30:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:12.007 20:30:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:12.007 20:30:46 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:05:12.007 20:30:46 event.app_repeat -- common/autotest_common.sh@867 -- # local i
00:05:12.007 20:30:46 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:05:12.007 20:30:46 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:05:12.007 20:30:46 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:05:12.007 20:30:46 event.app_repeat -- common/autotest_common.sh@871 -- # break
00:05:12.007 20:30:46 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:05:12.007 20:30:46 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:05:12.007 20:30:46 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:12.007 1+0 records in
00:05:12.007 1+0 records out
00:05:12.007 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000183884 s, 22.3 MB/s
00:05:12.007 20:30:46 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:12.007 20:30:46 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096
00:05:12.007 20:30:46 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:12.007 20:30:46 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:05:12.007 20:30:46 event.app_repeat -- common/autotest_common.sh@887 -- # return 0
00:05:12.007 20:30:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:12.007 20:30:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
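Attaching an nbd device is asynchronous from the kernel's point of view, so the helper polls /proc/partitions and then proves the device serves reads with one 4 KiB direct-I/O block. A condensed sketch of the waitfornbd logic traced above (the sleep interval is an assumption; it is not visible in the trace, and the real helper also retries the dd):

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off between polls
        done
        # One direct-I/O read: the copy must succeed and the result must be non-empty.
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        [ "$(stat -c %s /tmp/nbdtest)" != 0 ] || return 1
        rm -f /tmp/nbdtest
    }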
00:05:12.007 20:30:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:12.007 20:30:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:12.007 20:30:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:12.267 {
00:05:12.267 "nbd_device": "/dev/nbd0",
00:05:12.267 "bdev_name": "Malloc0"
00:05:12.267 },
00:05:12.267 {
00:05:12.267 "nbd_device": "/dev/nbd1",
00:05:12.267 "bdev_name": "Malloc1"
00:05:12.267 }
00:05:12.267 ]'
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:12.267 {
00:05:12.267 "nbd_device": "/dev/nbd0",
00:05:12.267 "bdev_name": "Malloc0"
00:05:12.267 },
00:05:12.267 {
00:05:12.267 "nbd_device": "/dev/nbd1",
00:05:12.267 "bdev_name": "Malloc1"
00:05:12.267 }
00:05:12.267 ]'
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:12.267 /dev/nbd1'
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:12.267 /dev/nbd1'
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:12.267 256+0 records in
00:05:12.267 256+0 records out
00:05:12.267 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103257 s, 102 MB/s
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:12.267 256+0 records in
00:05:12.267 256+0 records out
00:05:12.267 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132479 s, 79.2 MB/s
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:12.267 256+0 records in
00:05:12.267 256+0 records out
00:05:12.267 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141257 s, 74.2 MB/s
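The data-integrity check is plain dd plus cmp: a 1 MiB random pattern is written through each nbd device with direct I/O, and the verify pass that follows in the trace re-reads each device against the same file. Sketched with a shortened temp path:

    tmp=/tmp/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 1 MiB random pattern
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # write phase
    done
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$dev"                              # verify phase: exits non-zero on mismatch
    done
    rm "$tmp"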
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:12.267 20:30:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:12.527 20:30:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:12.527 20:30:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:12.527 20:30:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:12.527 20:30:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:12.527 20:30:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:12.527 20:30:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:12.527 20:30:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:12.527 20:30:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:12.527 20:30:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:12.527 20:30:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:12.786 20:30:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:12.786 20:30:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:12.786 20:30:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:12.786 20:30:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:12.786 20:30:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:12.786 20:30:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:12.786 20:30:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:12.786 20:30:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:12.786 20:30:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:12.786 20:30:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:12.786 20:30:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:12.786 20:30:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:12.786 20:30:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:12.786 20:30:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:13.045 20:30:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:13.045 20:30:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:13.045 20:30:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:13.045 20:30:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:13.045 20:30:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:13.045 20:30:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:13.045 20:30:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:13.045 20:30:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:13.045 20:30:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:13.045 20:30:47 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:13.045 20:30:47 event.app_repeat -- event/event.sh@35 -- # sleep 3
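Each round ends the same way: stop both nbd disks, confirm nbd_get_disks reports an empty list, then ask the app to shut down and allow time for the next round's restart. The empty-list check and teardown traced above reduce to the following sketch (paths shortened; the `|| true` mirrors the traced grep -c returning non-zero on zero matches):

    sock=/var/tmp/spdk-nbd.sock
    count=$(./scripts/rpc.py -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -ne 0 ] && { echo "nbd devices still attached"; exit 1; }
    ./scripts/rpc.py -s "$sock" spdk_kill_instance SIGTERM
    sleep 3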
00:05:13.304 [2024-07-15 20:30:47.682300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:13.304 [2024-07-15 20:30:47.748690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:13.304 [2024-07-15 20:30:47.748693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:13.563 [2024-07-15 20:30:47.789622] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:13.563 [2024-07-15 20:30:47.789659] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:16.123 20:30:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:16.123 20:30:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' spdk_app_start Round 1
00:05:16.123 20:30:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2513439 /var/tmp/spdk-nbd.sock
00:05:16.123 20:30:50 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2513439 ']'
00:05:16.123 20:30:50 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:16.123 20:30:50 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:16.123 20:30:50 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:16.123 20:30:50 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:16.123 20:30:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:16.381 20:30:50 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:16.381 20:30:50 event.app_repeat -- common/autotest_common.sh@862 -- # return 0
00:05:16.381 20:30:50 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:16.382 Malloc0
00:05:16.640 20:30:50 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:16.640 Malloc1
00:05:16.640 20:30:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:16.640 20:30:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:16.640 20:30:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:16.640 20:30:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:16.640 20:30:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:16.640 20:30:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:16.640 20:30:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:16.640 20:30:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:16.640 20:30:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:16.640 20:30:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:16.640 20:30:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:16.640 20:30:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:16.640 20:30:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:16.640 20:30:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:16.640 20:30:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:16.640 20:30:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:16.898 /dev/nbd0
00:05:16.898 20:30:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:16.898 20:30:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:16.898 20:30:51 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:05:16.898 20:30:51 event.app_repeat -- common/autotest_common.sh@867 -- # local i
00:05:16.898 20:30:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:05:16.898 20:30:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:05:16.898 20:30:51 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:05:16.898 20:30:51 event.app_repeat -- common/autotest_common.sh@871 -- # break
00:05:16.898 20:30:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:05:16.898 20:30:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:05:16.898 20:30:51 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:16.898 1+0 records in
00:05:16.898 1+0 records out
00:05:16.898 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020955 s, 19.5 MB/s
00:05:16.898 20:30:51 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:16.898 20:30:51 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096
00:05:16.898 20:30:51 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:16.898 20:30:51 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:05:16.898 20:30:51 event.app_repeat -- common/autotest_common.sh@887 -- # return 0
00:05:16.898 20:30:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:16.898 20:30:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:16.898 20:30:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:17.156 /dev/nbd1
00:05:17.156 20:30:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:17.156 20:30:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:17.156 20:30:51 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:05:17.156 20:30:51 event.app_repeat -- common/autotest_common.sh@867 -- # local i
00:05:17.156 20:30:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:05:17.156 20:30:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:05:17.156 20:30:51 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:05:17.156 20:30:51 event.app_repeat -- common/autotest_common.sh@871 -- # break
00:05:17.156 20:30:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:05:17.156 20:30:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:05:17.156 20:30:51 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:17.156 1+0 records in
00:05:17.156 1+0 records out
00:05:17.156 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213723 s, 19.2 MB/s
00:05:17.156 20:30:51 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:17.156 20:30:51 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096
00:05:17.156 20:30:51 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:17.156 20:30:51 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:05:17.156 20:30:51 event.app_repeat -- common/autotest_common.sh@887 -- # return 0
00:05:17.156 20:30:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:17.156 20:30:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:17.156 20:30:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:17.156 20:30:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:17.156 20:30:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:17.414 20:30:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:17.414 {
00:05:17.414 "nbd_device": "/dev/nbd0",
00:05:17.414 "bdev_name": "Malloc0"
00:05:17.414 },
00:05:17.414 {
00:05:17.414 "nbd_device": "/dev/nbd1",
00:05:17.414 "bdev_name": "Malloc1"
00:05:17.414 }
00:05:17.414 ]'
00:05:17.414 20:30:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:17.414 {
00:05:17.414 "nbd_device": "/dev/nbd0",
00:05:17.414 "bdev_name": "Malloc0"
00:05:17.414 },
00:05:17.414 {
00:05:17.414 "nbd_device": "/dev/nbd1",
00:05:17.414 "bdev_name": "Malloc1"
00:05:17.414 }
00:05:17.414 ]'
00:05:17.414 20:30:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:17.414 20:30:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:17.414 /dev/nbd1'
00:05:17.414 20:30:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:17.414 /dev/nbd1'
00:05:17.414 20:30:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:17.414 20:30:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:17.414 20:30:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:17.414 20:30:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:17.414 20:30:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:17.414 20:30:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:17.414 20:30:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:17.414 20:30:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:17.415 20:30:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:17.415 20:30:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:17.415 20:30:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:17.415 20:30:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:17.415 256+0 records in
00:05:17.415 256+0 records out
00:05:17.415 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01044 s, 100 MB/s
00:05:17.415 20:30:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:17.415 20:30:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:17.415 256+0 records in
00:05:17.415 256+0 records out
00:05:17.415 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141273 s, 74.2 MB/s
00:05:17.415 20:30:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:17.415 20:30:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:17.415 256+0 records in
00:05:17.415 256+0 records out
00:05:17.415 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147801 s, 70.9 MB/s
00:05:17.415 20:30:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:17.415 20:30:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:17.415 20:30:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:17.415 20:30:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:17.415 20:30:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:17.415 20:30:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:17.415 20:30:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:17.415 20:30:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:17.415 20:30:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:17.415 20:30:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:17.415 20:30:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:17.415 20:30:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:17.415 20:30:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:17.415 20:30:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:17.415 20:30:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:17.415 20:30:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:17.415 20:30:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:17.415 20:30:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:17.415 20:30:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:17.673 20:30:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:17.673 20:30:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:17.673 20:30:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:17.673 20:30:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:17.673 20:30:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:17.673 20:30:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:17.673 20:30:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:17.673 20:30:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:17.673 20:30:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:17.673 20:30:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:17.931 20:30:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:17.932 20:30:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:17.932 20:30:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:17.932 20:30:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:17.932 20:30:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:17.932 20:30:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:17.932 20:30:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:17.932 20:30:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:17.932 20:30:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:17.932 20:30:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:17.932 20:30:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:17.932 20:30:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:17.932 20:30:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:17.932 20:30:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:18.190 20:30:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:18.190 20:30:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:18.190 20:30:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:18.190 20:30:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:18.190 20:30:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:18.190 20:30:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:18.190 20:30:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:18.190 20:30:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:18.190 20:30:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:18.190 20:30:52 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:18.190 20:30:52 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:18.448 [2024-07-15 20:30:52.812927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:18.448 [2024-07-15 20:30:52.880889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:18.448 [2024-07-15 20:30:52.880891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:18.448 [2024-07-15 20:30:52.922407] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:18.448 [2024-07-15 20:30:52.922449] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:21.733 20:30:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:21.733 20:30:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' spdk_app_start Round 2
00:05:21.733 20:30:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2513439 /var/tmp/spdk-nbd.sock
00:05:21.733 20:30:55 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2513439 ']'
00:05:21.733 20:30:55 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:21.733 20:30:55 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:21.733 20:30:55 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:21.733 20:30:55 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:21.733 20:30:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:21.733 20:30:55 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:21.733 20:30:55 event.app_repeat -- common/autotest_common.sh@862 -- # return 0
00:05:21.733 20:30:55 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:21.733 Malloc0
00:05:21.733 20:30:55 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:21.733 Malloc1
00:05:21.733 20:30:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:21.733 20:30:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:21.733 20:30:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:21.733 20:30:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:21.733 20:30:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:21.733 20:30:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:21.733 20:30:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:21.733 20:30:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:21.733 20:30:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:21.733 20:30:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:21.733 20:30:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:21.733 20:30:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:21.733 20:30:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:21.733 20:30:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:21.733 20:30:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:21.733 20:30:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:21.991 /dev/nbd0
00:05:21.991 20:30:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:21.991 20:30:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:21.991 20:30:56 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:05:21.991 20:30:56 event.app_repeat -- common/autotest_common.sh@867 -- # local i
00:05:21.991 20:30:56 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:05:21.991 20:30:56 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:05:21.991 20:30:56 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:05:21.991 20:30:56 event.app_repeat -- common/autotest_common.sh@871 -- # break
00:05:21.991 20:30:56 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:05:21.991 20:30:56 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:05:21.991 20:30:56 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:21.991 1+0 records in
00:05:21.991 1+0 records out
00:05:21.992 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00015323 s, 26.7 MB/s
00:05:21.992 20:30:56 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:21.992 20:30:56 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096
00:05:21.992 20:30:56 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:21.992 20:30:56 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:05:21.992 20:30:56 event.app_repeat -- common/autotest_common.sh@887 -- # return 0
00:05:21.992 20:30:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:21.992 20:30:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:21.992 20:30:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:22.249 /dev/nbd1
00:05:22.250 20:30:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:22.250 20:30:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:22.250 20:30:56 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:05:22.250 20:30:56 event.app_repeat -- common/autotest_common.sh@867 -- # local i
00:05:22.250 20:30:56 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:05:22.250 20:30:56 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:05:22.250 20:30:56 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:05:22.250 20:30:56 event.app_repeat -- common/autotest_common.sh@871 -- # break
00:05:22.250 20:30:56 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:05:22.250 20:30:56 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:05:22.250 20:30:56 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:22.250 1+0 records in
00:05:22.250 1+0 records out
00:05:22.250 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187043 s, 21.9 MB/s
00:05:22.250 20:30:56 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:22.250 20:30:56 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096
00:05:22.250 20:30:56 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:22.250 20:30:56 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:05:22.250 20:30:56 event.app_repeat -- common/autotest_common.sh@887 -- # return 0
00:05:22.250 20:30:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:22.250 20:30:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:22.250 20:30:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:22.250 20:30:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:22.250 20:30:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:22.508 20:30:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:22.508 {
00:05:22.508 "nbd_device": "/dev/nbd0",
00:05:22.508 "bdev_name": "Malloc0"
00:05:22.508 },
00:05:22.508 {
00:05:22.508 "nbd_device": "/dev/nbd1",
00:05:22.509 "bdev_name": "Malloc1"
00:05:22.509 }
00:05:22.509 ]'
00:05:22.509 20:30:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:22.509 {
00:05:22.509 "nbd_device": "/dev/nbd0",
00:05:22.509 "bdev_name": "Malloc0"
00:05:22.509 },
00:05:22.509 {
00:05:22.509 "nbd_device": "/dev/nbd1",
00:05:22.509 "bdev_name": "Malloc1"
00:05:22.509 }
00:05:22.509 ]'
00:05:22.509 20:30:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:22.509 20:30:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:22.509 /dev/nbd1'
00:05:22.509 20:30:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:22.509 /dev/nbd1'
00:05:22.509 20:30:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:22.509 20:30:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:22.509 20:30:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:22.509 20:30:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:22.509 20:30:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:22.509 20:30:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:22.509 20:30:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:22.509 20:30:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:22.509 20:30:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:22.509 20:30:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:22.509 20:30:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:22.509 20:30:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:22.509 256+0 records in
00:05:22.509 256+0 records out
00:05:22.509 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010347 s, 101 MB/s
00:05:22.509 20:30:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:22.509 20:30:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:22.509 256+0 records in
00:05:22.509 256+0 records out
00:05:22.509 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137245 s, 76.4 MB/s
00:05:22.509 20:30:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:22.509 20:30:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:22.509 256+0 records in
00:05:22.509 256+0 records out
00:05:22.509 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147137 s, 71.3 MB/s
00:05:22.509 20:30:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:22.509 20:30:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:22.509 20:30:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:22.509 20:30:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:22.509 20:30:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:22.509 20:30:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:22.509 20:30:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:22.509 20:30:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:22.509 20:30:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:22.509 20:30:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:22.509 20:30:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:22.509 20:30:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:22.509 20:30:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:22.509 20:30:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:22.509 20:30:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:22.509 20:30:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:22.509 20:30:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:22.509 20:30:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:22.509 20:30:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:22.768 20:30:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:22.768 20:30:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:22.768 20:30:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:22.768 20:30:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:22.768 20:30:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:22.768 20:30:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:22.768 20:30:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:22.768 20:30:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:22.768 20:30:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:22.768 20:30:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:22.768 20:30:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:22.768 20:30:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:22.768 20:30:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:22.768 20:30:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:22.768 20:30:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:22.768 20:30:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:23.027 20:30:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:23.027 20:30:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:23.027 20:30:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:23.027 20:30:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:23.027 20:30:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:23.027 20:30:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:23.027 20:30:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:23.027 20:30:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:23.027 20:30:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:23.027 20:30:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:23.027 20:30:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:23.027 20:30:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:23.027 20:30:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:23.027 20:30:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:23.027 20:30:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:23.027 20:30:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:23.027 20:30:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:23.027 20:30:57 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:23.286 20:30:57 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:23.545 [2024-07-15 20:30:57.849477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:23.545 [2024-07-15 20:30:57.915710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:23.545 [2024-07-15 20:30:57.915713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:23.545 [2024-07-15 20:30:57.956539] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:23.545 [2024-07-15 20:30:57.956576] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:26.827 20:31:00 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2513439 /var/tmp/spdk-nbd.sock
00:05:26.827 20:31:00 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2513439 ']'
00:05:26.827 20:31:00 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:26.827 20:31:00 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:26.827 20:31:00 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:26.827 20:31:00 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:26.827 20:31:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:26.827 20:31:00 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:26.827 20:31:00 event.app_repeat -- common/autotest_common.sh@862 -- # return 0
00:05:26.827 20:31:00 event.app_repeat -- event/event.sh@39 -- # killprocess 2513439
00:05:26.827 20:31:00 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 2513439 ']'
00:05:26.827 20:31:00 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 2513439
00:05:26.827 20:31:00 event.app_repeat -- common/autotest_common.sh@953 -- # uname
00:05:26.827 20:31:00 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:26.827 20:31:00 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2513439
00:05:26.827 20:31:00 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:26.827 20:31:00 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:26.827 20:31:00 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2513439' killing process with pid 2513439
00:05:26.827 20:31:00 event.app_repeat -- common/autotest_common.sh@967 -- # kill 2513439
00:05:26.827 20:31:00 event.app_repeat -- common/autotest_common.sh@972 -- # wait 2513439
00:05:26.827 spdk_app_start is called in Round 0.
00:05:26.827 Shutdown signal received, stop current app iteration
00:05:26.827 Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 reinitialization...
00:05:26.827 spdk_app_start is called in Round 1.
00:05:26.827 Shutdown signal received, stop current app iteration
00:05:26.827 Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 reinitialization...
00:05:26.827 spdk_app_start is called in Round 2.
00:05:26.827 Shutdown signal received, stop current app iteration
00:05:26.827 Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 reinitialization...
00:05:26.827 spdk_app_start is called in Round 3.
00:05:26.827 Shutdown signal received, stop current app iteration
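With all four rounds finished, the harness reaps the app via the killprocess helper traced above. A condensed sketch of it (the real helper in autotest_common.sh carries extra branches, for example for processes started via sudo):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 0                   # nothing to do if it already exited
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            # the traced run sees reactor_0 here; the sudo comparison guards a different path
            [ "$process_name" = sudo ] && return 1   # simplification of the real sudo handling
        fi
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }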
00:05:26.827 Shutdown signal received, stop current app iteration 00:05:26.827 20:31:01 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:26.827 20:31:01 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:26.827 00:05:26.827 real 0m15.703s 00:05:26.827 user 0m34.016s 00:05:26.827 sys 0m2.341s 00:05:26.827 20:31:01 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.827 20:31:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:26.827 ************************************ 00:05:26.827 END TEST app_repeat 00:05:26.827 ************************************ 00:05:26.827 20:31:01 event -- common/autotest_common.sh@1142 -- # return 0 00:05:26.827 20:31:01 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:26.827 20:31:01 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:26.827 20:31:01 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.827 20:31:01 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.827 20:31:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.827 ************************************ 00:05:26.827 START TEST cpu_locks 00:05:26.827 ************************************ 00:05:26.827 20:31:01 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:26.827 * Looking for test storage... 00:05:26.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:26.827 20:31:01 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:26.827 20:31:01 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:26.827 20:31:01 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:26.827 20:31:01 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:26.827 20:31:01 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.827 20:31:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.827 20:31:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.827 ************************************ 00:05:26.827 START TEST default_locks 00:05:26.827 ************************************ 00:05:26.827 20:31:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:26.827 20:31:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:26.827 20:31:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2516415 00:05:26.827 20:31:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2516415 00:05:26.827 20:31:01 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2516415 ']' 00:05:26.827 20:31:01 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.827 20:31:01 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.827 20:31:01 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
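waitforlisten, invoked before every RPC in these tests, is at heart a poll loop: keep probing the target's UNIX-domain socket until an RPC answers or the process dies. A minimal sketch using spdk_get_version as the liveness probe; the retry budget and sleep interval are arbitrary choices here, not the harness's:

    # Block until the app with the given pid answers RPCs on a socket.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # app died during startup
            if scripts/rpc.py -s "$rpc_addr" -t 1 spdk_get_version >/dev/null 2>&1; then
                return 0                              # socket is up and answering
            fi
            sleep 0.1
        done
        return 1                                      # gave up waiting
    }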
00:05:26.828 20:31:01 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.828 20:31:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.828 [2024-07-15 20:31:01.271957] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:05:26.828 [2024-07-15 20:31:01.272002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2516415 ] 00:05:26.828 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.086 [2024-07-15 20:31:01.326752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.086 [2024-07-15 20:31:01.405938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.653 20:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.653 20:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:27.653 20:31:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2516415 00:05:27.653 20:31:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2516415 00:05:27.653 20:31:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:27.913 lslocks: write error 00:05:27.913 20:31:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2516415 00:05:27.913 20:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 2516415 ']' 00:05:27.913 20:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 2516415 00:05:27.913 20:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:27.913 20:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:27.913 20:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2516415 00:05:27.913 20:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:27.913 20:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:27.913 20:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2516415' 00:05:27.913 killing process with pid 2516415 00:05:27.913 20:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 2516415 00:05:27.913 20:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 2516415 00:05:28.172 20:31:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2516415 00:05:28.172 20:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:28.172 20:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2516415 00:05:28.172 20:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:28.172 20:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:28.172 20:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:28.172 20:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:28.172 20:31:02 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 2516415 00:05:28.172 20:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2516415 ']' 00:05:28.172 20:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.172 20:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.172 20:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.172 20:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.172 20:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2516415) - No such process 00:05:28.172 ERROR: process (pid: 2516415) is no longer running 00:05:28.172 20:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.173 20:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:28.173 20:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:28.173 20:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:28.173 20:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:28.173 20:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:28.173 20:31:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:28.173 20:31:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:28.173 20:31:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:28.173 20:31:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:28.173 00:05:28.173 real 0m1.339s 00:05:28.173 user 0m1.407s 00:05:28.173 sys 0m0.413s 00:05:28.173 20:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.173 20:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.173 ************************************ 00:05:28.173 END TEST default_locks 00:05:28.173 ************************************ 00:05:28.173 20:31:02 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:28.173 20:31:02 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:28.173 20:31:02 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.173 20:31:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.173 20:31:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.173 ************************************ 00:05:28.173 START TEST default_locks_via_rpc 00:05:28.173 ************************************ 00:05:28.173 20:31:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:28.173 20:31:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2516680 00:05:28.173 20:31:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2516680 00:05:28.173 20:31:02 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.173 20:31:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2516680 ']' 00:05:28.173 20:31:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.173 20:31:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.173 20:31:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.173 20:31:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.173 20:31:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.434 [2024-07-15 20:31:02.694313] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:05:28.435 [2024-07-15 20:31:02.694351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2516680 ] 00:05:28.435 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.435 [2024-07-15 20:31:02.745404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.435 [2024-07-15 20:31:02.818121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.002 20:31:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.002 20:31:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:29.002 20:31:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:29.003 20:31:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.003 20:31:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.260 20:31:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.260 20:31:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:29.260 20:31:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:29.260 20:31:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:29.260 20:31:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:29.260 20:31:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:29.260 20:31:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.260 20:31:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.260 20:31:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.260 20:31:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2516680 00:05:29.260 20:31:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2516680 00:05:29.260 20:31:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
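default_locks_via_rpc exercises the runtime half of the machinery: framework_disable_cpumask_locks releases the per-core lock files and framework_enable_cpumask_locks re-acquires them, while lslocks shows whether the pid still holds any /var/tmp/spdk_cpu_lock_* file. A condensed sketch of that round trip, assuming $tgt_pid holds the running target's pid; locks_exist matches the helper name in the trace, the rest is illustrative:

    # True when the pid holds at least one SPDK per-core lock file.
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    sock=/var/tmp/spdk.sock
    scripts/rpc.py -s "$sock" framework_disable_cpumask_locks    # drop the locks at runtime
    locks_exist "$tgt_pid" && echo "unexpected: locks still held"
    scripts/rpc.py -s "$sock" framework_enable_cpumask_locks     # take them back
    locks_exist "$tgt_pid" || echo "unexpected: locks not re-acquired"

The stray "lslocks: write error" lines throughout these traces are benign: grep -q exits at the first match, the pipe closes, and lslocks reports the resulting EPIPE on its next write.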
00:05:29.260 20:31:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2516680 00:05:29.260 20:31:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 2516680 ']' 00:05:29.260 20:31:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 2516680 00:05:29.260 20:31:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:29.260 20:31:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:29.260 20:31:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2516680 00:05:29.260 20:31:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:29.260 20:31:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:29.260 20:31:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2516680' 00:05:29.260 killing process with pid 2516680 00:05:29.260 20:31:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 2516680 00:05:29.260 20:31:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 2516680 00:05:29.828 00:05:29.828 real 0m1.396s 00:05:29.828 user 0m1.458s 00:05:29.828 sys 0m0.432s 00:05:29.828 20:31:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.828 20:31:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.828 ************************************ 00:05:29.828 END TEST default_locks_via_rpc 00:05:29.828 ************************************ 00:05:29.828 20:31:04 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:29.828 20:31:04 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:29.828 20:31:04 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.828 20:31:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.828 20:31:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.828 ************************************ 00:05:29.828 START TEST non_locking_app_on_locked_coremask 00:05:29.828 ************************************ 00:05:29.828 20:31:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:29.828 20:31:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2516939 00:05:29.828 20:31:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2516939 /var/tmp/spdk.sock 00:05:29.828 20:31:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:29.828 20:31:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2516939 ']' 00:05:29.828 20:31:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.828 20:31:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.828 20:31:04 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.828 20:31:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.828 20:31:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.828 [2024-07-15 20:31:04.151674] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:05:29.828 [2024-07-15 20:31:04.151714] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2516939 ] 00:05:29.828 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.828 [2024-07-15 20:31:04.203555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.828 [2024-07-15 20:31:04.282770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.766 20:31:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.766 20:31:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:30.766 20:31:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2516957 00:05:30.766 20:31:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2516957 /var/tmp/spdk2.sock 00:05:30.766 20:31:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:30.766 20:31:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2516957 ']' 00:05:30.766 20:31:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:30.766 20:31:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.766 20:31:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:30.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:30.766 20:31:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.766 20:31:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.766 [2024-07-15 20:31:05.010152] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:05:30.766 [2024-07-15 20:31:05.010200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2516957 ] 00:05:30.766 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.766 [2024-07-15 20:31:05.084596] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
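This is the situation non_locking_app_on_locked_coremask sets up: the first target owns core 0's lock, yet a second target can still share the core provided it opts out of locking and takes its own RPC socket. A bare-bones reproduction, assuming a built spdk_tgt and substituting plain sleeps for the waitforlisten loop sketched earlier:

    build/bin/spdk_tgt -m 0x1 &                       # claims /var/tmp/spdk_cpu_lock_000
    pid1=$!
    sleep 2                                           # crude stand-in for waitforlisten
    # Same core mask, but with core locks disabled and a second RPC socket.
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!
    sleep 2
    lslocks -p "$pid1" | grep spdk_cpu_lock           # the lock stays with the first instance
    kill "$pid2" "$pid1" && wait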
00:05:30.766 [2024-07-15 20:31:05.084628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.766 [2024-07-15 20:31:05.238163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.335 20:31:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.335 20:31:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:31.335 20:31:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2516939 00:05:31.335 20:31:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2516939 00:05:31.335 20:31:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:32.272 lslocks: write error 00:05:32.272 20:31:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2516939 00:05:32.272 20:31:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2516939 ']' 00:05:32.272 20:31:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2516939 00:05:32.272 20:31:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:32.272 20:31:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:32.272 20:31:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2516939 00:05:32.272 20:31:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:32.272 20:31:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:32.272 20:31:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2516939' 00:05:32.272 killing process with pid 2516939 00:05:32.272 20:31:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2516939 00:05:32.272 20:31:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2516939 00:05:32.840 20:31:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2516957 00:05:32.840 20:31:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2516957 ']' 00:05:32.840 20:31:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2516957 00:05:32.840 20:31:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:32.840 20:31:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:32.840 20:31:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2516957 00:05:32.840 20:31:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:32.840 20:31:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:32.840 20:31:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2516957' 00:05:32.840 
killing process with pid 2516957 00:05:32.840 20:31:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2516957 00:05:32.840 20:31:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2516957 00:05:33.099 00:05:33.099 real 0m3.293s 00:05:33.099 user 0m3.535s 00:05:33.099 sys 0m0.940s 00:05:33.099 20:31:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.099 20:31:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.099 ************************************ 00:05:33.099 END TEST non_locking_app_on_locked_coremask 00:05:33.099 ************************************ 00:05:33.099 20:31:07 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:33.099 20:31:07 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:33.099 20:31:07 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:33.099 20:31:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.099 20:31:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.099 ************************************ 00:05:33.099 START TEST locking_app_on_unlocked_coremask 00:05:33.099 ************************************ 00:05:33.099 20:31:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:33.099 20:31:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2517454 00:05:33.100 20:31:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2517454 /var/tmp/spdk.sock 00:05:33.100 20:31:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:33.100 20:31:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2517454 ']' 00:05:33.100 20:31:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.100 20:31:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.100 20:31:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.100 20:31:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.100 20:31:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.100 [2024-07-15 20:31:07.508336] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
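locking_app_on_unlocked_coremask, starting here, flips the roles: the first instance runs lockless, so a later plain instance is free to claim the core, and the interesting assertion is who owns the lock afterwards. A sketch under the same assumptions as the previous block:

    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # holds no lock files
    pid1=$!
    sleep 2
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # free to lock core 0
    pid2=$!
    sleep 2
    lslocks -p "$pid2" | grep spdk_cpu_lock               # the second pid owns the lock
    lslocks -p "$pid1" | grep -c spdk_cpu_lock || true    # while the first holds none
    kill "$pid1" "$pid2" && wait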
00:05:33.100 [2024-07-15 20:31:07.508377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2517454 ] 00:05:33.100 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.100 [2024-07-15 20:31:07.560293] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:33.100 [2024-07-15 20:31:07.560323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.358 [2024-07-15 20:31:07.629751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.925 20:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.925 20:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:33.925 20:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2517678 00:05:33.925 20:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2517678 /var/tmp/spdk2.sock 00:05:33.925 20:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:33.925 20:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2517678 ']' 00:05:33.925 20:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:33.925 20:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.925 20:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:33.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:33.925 20:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.925 20:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.925 [2024-07-15 20:31:08.349011] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:05:33.925 [2024-07-15 20:31:08.349061] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2517678 ] 00:05:33.925 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.184 [2024-07-15 20:31:08.426037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.184 [2024-07-15 20:31:08.571922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.751 20:31:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.751 20:31:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:34.751 20:31:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2517678 00:05:34.751 20:31:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2517678 00:05:34.751 20:31:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:35.320 lslocks: write error 00:05:35.320 20:31:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2517454 00:05:35.320 20:31:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2517454 ']' 00:05:35.320 20:31:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2517454 00:05:35.320 20:31:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:35.320 20:31:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:35.320 20:31:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2517454 00:05:35.320 20:31:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:35.320 20:31:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:35.320 20:31:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2517454' 00:05:35.320 killing process with pid 2517454 00:05:35.320 20:31:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2517454 00:05:35.320 20:31:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2517454 00:05:35.888 20:31:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2517678 00:05:35.888 20:31:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2517678 ']' 00:05:35.888 20:31:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2517678 00:05:35.888 20:31:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:35.888 20:31:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:35.888 20:31:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2517678 00:05:35.888 20:31:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:05:35.888 20:31:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:35.888 20:31:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2517678' 00:05:35.888 killing process with pid 2517678 00:05:35.888 20:31:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2517678 00:05:35.888 20:31:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2517678 00:05:36.175 00:05:36.175 real 0m3.127s 00:05:36.175 user 0m3.357s 00:05:36.175 sys 0m0.872s 00:05:36.175 20:31:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.175 20:31:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.175 ************************************ 00:05:36.175 END TEST locking_app_on_unlocked_coremask 00:05:36.175 ************************************ 00:05:36.175 20:31:10 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:36.175 20:31:10 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:36.175 20:31:10 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.175 20:31:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.175 20:31:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.436 ************************************ 00:05:36.436 START TEST locking_app_on_locked_coremask 00:05:36.436 ************************************ 00:05:36.436 20:31:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:36.436 20:31:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2518068 00:05:36.436 20:31:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.436 20:31:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2518068 /var/tmp/spdk.sock 00:05:36.436 20:31:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2518068 ']' 00:05:36.436 20:31:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.436 20:31:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.436 20:31:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.436 20:31:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.436 20:31:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.436 [2024-07-15 20:31:10.701419] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
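locking_app_on_locked_coremask, whose trace begins above, asserts the opposite failure: with locking left at its default, a second instance on an already-claimed core must refuse to start, which is what the NOT waitforlisten sequence and the claim_cpu_cores error further below establish. A sketch of that expected failure; the timeout is only there so the sketch cannot hang if the launch unexpectedly succeeds:

    build/bin/spdk_tgt -m 0x1 &                       # claims core 0's lock
    pid1=$!
    sleep 2
    # Must abort with "Unable to acquire lock on assigned core mask - exiting."
    timeout 5 build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock \
        && echo "unexpected: second locked instance started"
    kill "$pid1" && wait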
00:05:36.436 [2024-07-15 20:31:10.701460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2518068 ] 00:05:36.436 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.436 [2024-07-15 20:31:10.754679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.436 [2024-07-15 20:31:10.834578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.374 20:31:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.374 20:31:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:37.374 20:31:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:37.374 20:31:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2518187 00:05:37.374 20:31:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2518187 /var/tmp/spdk2.sock 00:05:37.374 20:31:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:37.374 20:31:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2518187 /var/tmp/spdk2.sock 00:05:37.374 20:31:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:37.374 20:31:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.374 20:31:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:37.374 20:31:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.374 20:31:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2518187 /var/tmp/spdk2.sock 00:05:37.374 20:31:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2518187 ']' 00:05:37.374 20:31:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:37.374 20:31:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.374 20:31:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:37.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:37.374 20:31:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.374 20:31:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.374 [2024-07-15 20:31:11.539017] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:05:37.374 [2024-07-15 20:31:11.539062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2518187 ] 00:05:37.374 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.374 [2024-07-15 20:31:11.610165] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2518068 has claimed it. 00:05:37.374 [2024-07-15 20:31:11.610194] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:37.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2518187) - No such process 00:05:37.942 ERROR: process (pid: 2518187) is no longer running 00:05:37.942 20:31:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.942 20:31:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:37.942 20:31:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:37.942 20:31:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:37.942 20:31:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:37.942 20:31:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:37.942 20:31:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2518068 00:05:37.942 20:31:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2518068 00:05:37.942 20:31:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:38.201 lslocks: write error 00:05:38.201 20:31:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2518068 00:05:38.201 20:31:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2518068 ']' 00:05:38.201 20:31:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2518068 00:05:38.201 20:31:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:38.201 20:31:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:38.201 20:31:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2518068 00:05:38.202 20:31:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:38.202 20:31:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:38.202 20:31:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2518068' 00:05:38.202 killing process with pid 2518068 00:05:38.202 20:31:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2518068 00:05:38.202 20:31:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2518068 00:05:38.461 00:05:38.461 real 0m2.201s 00:05:38.461 user 0m2.444s 00:05:38.461 sys 0m0.553s 00:05:38.461 20:31:12 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.461 20:31:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.461 ************************************ 00:05:38.461 END TEST locking_app_on_locked_coremask 00:05:38.461 ************************************ 00:05:38.461 20:31:12 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:38.461 20:31:12 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:38.461 20:31:12 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.461 20:31:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.461 20:31:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.461 ************************************ 00:05:38.461 START TEST locking_overlapped_coremask 00:05:38.461 ************************************ 00:05:38.461 20:31:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:38.461 20:31:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2518448 00:05:38.461 20:31:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2518448 /var/tmp/spdk.sock 00:05:38.461 20:31:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:38.461 20:31:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2518448 ']' 00:05:38.461 20:31:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.461 20:31:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.461 20:31:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.461 20:31:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.461 20:31:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.720 [2024-07-15 20:31:12.974555] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
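locking_overlapped_coremask generalizes this to partially overlapping masks: 0x7 pins cores 0-2, 0x1c asks for cores 2-4, and the single shared core 2 is enough to abort the second launch. A sketch, with a plain lock-file listing standing in for the test's check_remaining_locks comparison:

    build/bin/spdk_tgt -m 0x7 &                       # locks cores 0, 1 and 2
    pid1=$!
    sleep 2
    # Overlaps only on core 2, but one contended core is fatal.
    timeout 5 build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock \
        && echo "unexpected: overlapping instance started"
    ls /var/tmp/spdk_cpu_lock_*                       # expect exactly _000 _001 _002
    kill "$pid1" && wait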
00:05:38.720 [2024-07-15 20:31:12.974594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2518448 ] 00:05:38.720 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.720 [2024-07-15 20:31:13.027497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:38.720 [2024-07-15 20:31:13.108676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.720 [2024-07-15 20:31:13.108770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.720 [2024-07-15 20:31:13.108770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:39.664 20:31:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.664 20:31:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:39.664 20:31:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2518678 00:05:39.664 20:31:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2518678 /var/tmp/spdk2.sock 00:05:39.664 20:31:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:39.664 20:31:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:39.664 20:31:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2518678 /var/tmp/spdk2.sock 00:05:39.664 20:31:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:39.664 20:31:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:39.664 20:31:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:39.664 20:31:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:39.664 20:31:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2518678 /var/tmp/spdk2.sock 00:05:39.664 20:31:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2518678 ']' 00:05:39.664 20:31:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:39.664 20:31:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.664 20:31:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:39.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:39.664 20:31:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.664 20:31:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.664 [2024-07-15 20:31:13.835394] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:05:39.664 [2024-07-15 20:31:13.835441] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2518678 ] 00:05:39.664 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.664 [2024-07-15 20:31:13.912137] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2518448 has claimed it. 00:05:39.664 [2024-07-15 20:31:13.912174] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:40.232 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2518678) - No such process 00:05:40.232 ERROR: process (pid: 2518678) is no longer running 00:05:40.232 20:31:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.232 20:31:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:40.232 20:31:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:40.232 20:31:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:40.232 20:31:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:40.232 20:31:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:40.233 20:31:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:40.233 20:31:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:40.233 20:31:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:40.233 20:31:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:40.233 20:31:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2518448 00:05:40.233 20:31:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 2518448 ']' 00:05:40.233 20:31:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 2518448 00:05:40.233 20:31:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:40.233 20:31:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:40.233 20:31:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2518448 00:05:40.233 20:31:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:40.233 20:31:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:40.233 20:31:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2518448' 00:05:40.233 killing process with pid 2518448 00:05:40.233 20:31:14 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 2518448 00:05:40.233 20:31:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 2518448 00:05:40.492 00:05:40.492 real 0m1.894s 00:05:40.492 user 0m5.356s 00:05:40.492 sys 0m0.406s 00:05:40.492 20:31:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.492 20:31:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.492 ************************************ 00:05:40.492 END TEST locking_overlapped_coremask 00:05:40.492 ************************************ 00:05:40.492 20:31:14 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:40.492 20:31:14 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:40.492 20:31:14 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.492 20:31:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.492 20:31:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.492 ************************************ 00:05:40.492 START TEST locking_overlapped_coremask_via_rpc 00:05:40.492 ************************************ 00:05:40.492 20:31:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:40.492 20:31:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2518874 00:05:40.492 20:31:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2518874 /var/tmp/spdk.sock 00:05:40.492 20:31:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:40.492 20:31:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2518874 ']' 00:05:40.492 20:31:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.492 20:31:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.492 20:31:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.492 20:31:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.493 20:31:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.493 [2024-07-15 20:31:14.938112] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:05:40.493 [2024-07-15 20:31:14.938153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2518874 ] 00:05:40.493 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.752 [2024-07-15 20:31:14.991904] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:40.752 [2024-07-15 20:31:14.991934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:40.752 [2024-07-15 20:31:15.066077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.752 [2024-07-15 20:31:15.066173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.752 [2024-07-15 20:31:15.066174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.321 20:31:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.321 20:31:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:41.321 20:31:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2518952 00:05:41.321 20:31:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2518952 /var/tmp/spdk2.sock 00:05:41.321 20:31:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:41.321 20:31:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2518952 ']' 00:05:41.321 20:31:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.321 20:31:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.321 20:31:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:41.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:41.321 20:31:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.321 20:31:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.321 [2024-07-15 20:31:15.777938] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:05:41.321 [2024-07-15 20:31:15.777988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2518952 ] 00:05:41.321 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.580 [2024-07-15 20:31:15.855500] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:41.580 [2024-07-15 20:31:15.855531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:41.580 [2024-07-15 20:31:16.007285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:41.580 [2024-07-15 20:31:16.007404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.580 [2024-07-15 20:31:16.007405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:42.148 20:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.148 20:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:42.148 20:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:42.148 20:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.148 20:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.408 20:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.408 20:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:42.408 20:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:42.408 20:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:42.408 20:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:42.408 20:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:42.408 20:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:42.408 20:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:42.408 20:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:42.408 20:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.408 20:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.408 [2024-07-15 20:31:16.648298] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2518874 has claimed it. 
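The collision is plain bitmask arithmetic: the first target holds mask 0x7 (cores 0-2) while the second requests 0x1c (cores 2-4), and the two overlap exactly on core 2, the core named in the claim error:

    # 0x7  = 0b00111 -> cores 0,1,2
    # 0x1c = 0b11100 -> cores 2,3,4
    printf 'contested mask: 0x%x\n' $(( 0x7 & 0x1c ))    # prints 0x4, i.e. core 2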
00:05:42.408 request: 00:05:42.408 { 00:05:42.408 "method": "framework_enable_cpumask_locks", 00:05:42.408 "req_id": 1 00:05:42.408 } 00:05:42.408 Got JSON-RPC error response 00:05:42.408 response: 00:05:42.408 { 00:05:42.408 "code": -32603, 00:05:42.408 "message": "Failed to claim CPU core: 2" 00:05:42.408 } 00:05:42.408 20:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:42.408 20:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:42.408 20:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:42.408 20:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:42.408 20:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:42.408 20:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2518874 /var/tmp/spdk.sock 00:05:42.408 20:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2518874 ']' 00:05:42.408 20:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.408 20:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:42.408 20:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.408 20:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:42.408 20:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.408 20:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.408 20:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:42.408 20:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2518952 /var/tmp/spdk2.sock 00:05:42.408 20:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2518952 ']' 00:05:42.408 20:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:42.408 20:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:42.408 20:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:42.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
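The -32603 response is the RPC-visible face of the same core-2 collision. The lock files themselves (/var/tmp/spdk_cpu_lock_000 through _002, which check_remaining_locks verifies below) act as advisory per-core locks; a minimal shell illustration of the idea, not SPDK's actual implementation:

    # hypothetical sketch: one advisory lock file per claimed core
    core=2
    lock=/var/tmp/spdk_cpu_lock_$(printf '%03d' "$core")
    exec 9>"$lock"
    flock -n 9 || { echo "Cannot create lock on core $core" >&2; exit 1; }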
00:05:42.408 20:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:42.408 20:31:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.667 20:31:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.667 20:31:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:42.667 20:31:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:42.667 20:31:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:42.667 20:31:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:42.667 20:31:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:42.667 00:05:42.667 real 0m2.146s 00:05:42.667 user 0m0.921s 00:05:42.667 sys 0m0.152s 00:05:42.667 20:31:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.667 20:31:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.667 ************************************ 00:05:42.667 END TEST locking_overlapped_coremask_via_rpc 00:05:42.667 ************************************ 00:05:42.667 20:31:17 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:42.667 20:31:17 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:42.667 20:31:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2518874 ]] 00:05:42.667 20:31:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2518874 00:05:42.667 20:31:17 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2518874 ']' 00:05:42.667 20:31:17 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2518874 00:05:42.667 20:31:17 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:42.667 20:31:17 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:42.667 20:31:17 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2518874 00:05:42.667 20:31:17 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:42.667 20:31:17 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:42.667 20:31:17 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2518874' 00:05:42.667 killing process with pid 2518874 00:05:42.667 20:31:17 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2518874 00:05:42.667 20:31:17 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2518874 00:05:43.236 20:31:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2518952 ]] 00:05:43.236 20:31:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2518952 00:05:43.236 20:31:17 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2518952 ']' 00:05:43.236 20:31:17 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2518952 00:05:43.236 20:31:17 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:05:43.236 20:31:17 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:43.236 20:31:17 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2518952 00:05:43.236 20:31:17 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:43.236 20:31:17 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:43.236 20:31:17 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2518952' 00:05:43.236 killing process with pid 2518952 00:05:43.236 20:31:17 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2518952 00:05:43.236 20:31:17 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2518952 00:05:43.495 20:31:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:43.495 20:31:17 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:43.495 20:31:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2518874 ]] 00:05:43.495 20:31:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2518874 00:05:43.495 20:31:17 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2518874 ']' 00:05:43.495 20:31:17 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2518874 00:05:43.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2518874) - No such process 00:05:43.495 20:31:17 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2518874 is not found' 00:05:43.495 Process with pid 2518874 is not found 00:05:43.495 20:31:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2518952 ]] 00:05:43.495 20:31:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2518952 00:05:43.495 20:31:17 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2518952 ']' 00:05:43.495 20:31:17 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2518952 00:05:43.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2518952) - No such process 00:05:43.495 20:31:17 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2518952 is not found' 00:05:43.495 Process with pid 2518952 is not found 00:05:43.495 20:31:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:43.495 00:05:43.495 real 0m16.681s 00:05:43.495 user 0m29.260s 00:05:43.495 sys 0m4.655s 00:05:43.495 20:31:17 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.495 20:31:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.495 ************************************ 00:05:43.495 END TEST cpu_locks 00:05:43.495 ************************************ 00:05:43.495 20:31:17 event -- common/autotest_common.sh@1142 -- # return 0 00:05:43.495 00:05:43.495 real 0m40.893s 00:05:43.495 user 1m18.187s 00:05:43.495 sys 0m7.879s 00:05:43.495 20:31:17 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.495 20:31:17 event -- common/autotest_common.sh@10 -- # set +x 00:05:43.495 ************************************ 00:05:43.495 END TEST event 00:05:43.495 ************************************ 00:05:43.495 20:31:17 -- common/autotest_common.sh@1142 -- # return 0 00:05:43.495 20:31:17 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:43.495 20:31:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.495 20:31:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.495 
20:31:17 -- common/autotest_common.sh@10 -- # set +x 00:05:43.495 ************************************ 00:05:43.495 START TEST thread 00:05:43.495 ************************************ 00:05:43.495 20:31:17 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:43.495 * Looking for test storage... 00:05:43.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:43.495 20:31:17 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:43.495 20:31:17 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:43.495 20:31:17 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.495 20:31:17 thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.754 ************************************ 00:05:43.754 START TEST thread_poller_perf 00:05:43.754 ************************************ 00:05:43.754 20:31:18 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:43.754 [2024-07-15 20:31:18.026038] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:05:43.754 [2024-07-15 20:31:18.026109] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2519506 ] 00:05:43.754 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.754 [2024-07-15 20:31:18.082394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.754 [2024-07-15 20:31:18.155166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.754 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:45.129 ====================================== 00:05:45.129 busy:2308084138 (cyc) 00:05:45.129 total_run_count: 406000 00:05:45.129 tsc_hz: 2300000000 (cyc) 00:05:45.129 ====================================== 00:05:45.129 poller_cost: 5684 (cyc), 2471 (nsec) 00:05:45.129 00:05:45.129 real 0m1.226s 00:05:45.129 user 0m1.149s 00:05:45.129 sys 0m0.073s 00:05:45.129 20:31:19 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.129 20:31:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:45.129 ************************************ 00:05:45.129 END TEST thread_poller_perf 00:05:45.129 ************************************ 00:05:45.129 20:31:19 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:45.129 20:31:19 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:45.129 20:31:19 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:45.129 20:31:19 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.129 20:31:19 thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.129 ************************************ 00:05:45.129 START TEST thread_poller_perf 00:05:45.129 ************************************ 00:05:45.129 20:31:19 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:45.129 [2024-07-15 20:31:19.318703] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:05:45.129 [2024-07-15 20:31:19.318770] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2519753 ] 00:05:45.129 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.129 [2024-07-15 20:31:19.375520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.129 [2024-07-15 20:31:19.446794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.129 Running 1000 pollers for 1 seconds with 0 microseconds period. 
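The summary fields compose directly: poller_cost in cycles is busy cycles divided by total_run_count, and the nsec figure converts that through tsc_hz (truncating, by the look of the reported values). Re-deriving the 1-microsecond-period run:

    busy=2308084138; runs=406000
    echo $(( busy / runs ))                                           # 5684 cycles per poll
    awk 'BEGIN { printf "%d nsec\n", int(5684 / 2300000000 * 1e9) }'  # 2471 nsec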
00:05:46.114 ====================================== 00:05:46.114 busy:2301447576 (cyc) 00:05:46.114 total_run_count: 5301000 00:05:46.114 tsc_hz: 2300000000 (cyc) 00:05:46.114 ====================================== 00:05:46.114 poller_cost: 434 (cyc), 188 (nsec) 00:05:46.114 00:05:46.114 real 0m1.222s 00:05:46.114 user 0m1.144s 00:05:46.114 sys 0m0.074s 00:05:46.114 20:31:20 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.114 20:31:20 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:46.114 ************************************ 00:05:46.114 END TEST thread_poller_perf 00:05:46.114 ************************************ 00:05:46.114 20:31:20 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:46.114 20:31:20 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:46.114 00:05:46.114 real 0m2.663s 00:05:46.114 user 0m2.389s 00:05:46.114 sys 0m0.281s 00:05:46.114 20:31:20 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.114 20:31:20 thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.114 ************************************ 00:05:46.114 END TEST thread 00:05:46.114 ************************************ 00:05:46.114 20:31:20 -- common/autotest_common.sh@1142 -- # return 0 00:05:46.114 20:31:20 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:46.114 20:31:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.114 20:31:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.114 20:31:20 -- common/autotest_common.sh@10 -- # set +x 00:05:46.372 ************************************ 00:05:46.372 START TEST accel 00:05:46.372 ************************************ 00:05:46.372 20:31:20 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:46.372 * Looking for test storage... 00:05:46.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:46.372 20:31:20 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:46.372 20:31:20 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:46.372 20:31:20 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:46.372 20:31:20 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2520046 00:05:46.372 20:31:20 accel -- accel/accel.sh@63 -- # waitforlisten 2520046 00:05:46.372 20:31:20 accel -- common/autotest_common.sh@829 -- # '[' -z 2520046 ']' 00:05:46.372 20:31:20 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.372 20:31:20 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:46.372 20:31:20 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.372 20:31:20 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:46.372 20:31:20 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
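The zero-period run amortizes far better, roughly 13x cheaper per poll, plausibly because back-to-back pollers avoid per-expiry timer bookkeeping; the same arithmetic reproduces its line too:

    awk 'BEGIN { c = int(2301447576 / 5301000);
                 printf "%d cycles, %d nsec\n", c, int(c / 2300000000 * 1e9) }'   # 434 cycles, 188 nsec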
00:05:46.372 20:31:20 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:46.372 20:31:20 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.372 20:31:20 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:46.372 20:31:20 accel -- common/autotest_common.sh@10 -- # set +x 00:05:46.372 20:31:20 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.372 20:31:20 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.372 20:31:20 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:46.372 20:31:20 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:46.372 20:31:20 accel -- accel/accel.sh@41 -- # jq -r . 00:05:46.372 [2024-07-15 20:31:20.759605] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:05:46.372 [2024-07-15 20:31:20.759648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2520046 ] 00:05:46.372 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.372 [2024-07-15 20:31:20.814369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.630 [2024-07-15 20:31:20.888820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.198 20:31:21 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.198 20:31:21 accel -- common/autotest_common.sh@862 -- # return 0 00:05:47.198 20:31:21 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:47.198 20:31:21 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:47.198 20:31:21 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:47.198 20:31:21 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:47.199 20:31:21 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:47.199 20:31:21 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:47.199 20:31:21 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:47.199 20:31:21 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.199 20:31:21 accel -- common/autotest_common.sh@10 -- # set +x 00:05:47.199 20:31:21 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.199 20:31:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:47.199 20:31:21 accel -- accel/accel.sh@72 -- # IFS== 00:05:47.199 20:31:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:47.199 20:31:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:47.199 20:31:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:47.199 20:31:21 accel -- accel/accel.sh@72 -- # IFS== 00:05:47.199 20:31:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:47.199 20:31:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:47.199 20:31:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:47.199 20:31:21 accel -- accel/accel.sh@72 -- # IFS== 00:05:47.199 20:31:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:47.199 20:31:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:47.199 20:31:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:47.199 20:31:21 accel -- accel/accel.sh@72 -- # IFS== 00:05:47.199 20:31:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:47.199 20:31:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:47.199 20:31:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:47.199 20:31:21 accel -- accel/accel.sh@72 -- # IFS== 00:05:47.199 20:31:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:47.199 20:31:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:47.199 20:31:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:47.199 20:31:21 accel -- accel/accel.sh@72 -- # IFS== 00:05:47.199 20:31:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:47.199 20:31:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:47.199 20:31:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:47.199 20:31:21 accel -- accel/accel.sh@72 -- # IFS== 00:05:47.199 20:31:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:47.199 20:31:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:47.199 20:31:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:47.199 20:31:21 accel -- accel/accel.sh@72 -- # IFS== 00:05:47.199 20:31:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:47.199 20:31:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:47.199 20:31:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:47.199 20:31:21 accel -- accel/accel.sh@72 -- # IFS== 00:05:47.199 20:31:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:47.199 20:31:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:47.199 20:31:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:47.199 20:31:21 accel -- accel/accel.sh@72 -- # IFS== 00:05:47.199 20:31:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:47.199 20:31:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:47.199 20:31:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:47.199 20:31:21 accel -- accel/accel.sh@72 -- # IFS== 00:05:47.199 20:31:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:47.199 
20:31:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:47.199 20:31:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:47.199 20:31:21 accel -- accel/accel.sh@72 -- # IFS== 00:05:47.199 20:31:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:47.199 20:31:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:47.199 20:31:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:47.199 20:31:21 accel -- accel/accel.sh@72 -- # IFS== 00:05:47.199 20:31:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:47.199 20:31:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:47.199 20:31:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:47.199 20:31:21 accel -- accel/accel.sh@72 -- # IFS== 00:05:47.199 20:31:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:47.199 20:31:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:47.199 20:31:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:47.199 20:31:21 accel -- accel/accel.sh@72 -- # IFS== 00:05:47.199 20:31:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:47.199 20:31:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:47.199 20:31:21 accel -- accel/accel.sh@75 -- # killprocess 2520046 00:05:47.199 20:31:21 accel -- common/autotest_common.sh@948 -- # '[' -z 2520046 ']' 00:05:47.199 20:31:21 accel -- common/autotest_common.sh@952 -- # kill -0 2520046 00:05:47.199 20:31:21 accel -- common/autotest_common.sh@953 -- # uname 00:05:47.199 20:31:21 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:47.199 20:31:21 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2520046 00:05:47.199 20:31:21 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:47.199 20:31:21 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:47.199 20:31:21 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2520046' 00:05:47.199 killing process with pid 2520046 00:05:47.199 20:31:21 accel -- common/autotest_common.sh@967 -- # kill 2520046 00:05:47.199 20:31:21 accel -- common/autotest_common.sh@972 -- # wait 2520046 00:05:47.768 20:31:21 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:47.768 20:31:21 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:47.768 20:31:21 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:47.768 20:31:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.768 20:31:21 accel -- common/autotest_common.sh@10 -- # set +x 00:05:47.768 20:31:21 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:47.768 20:31:21 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:47.768 20:31:21 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:47.768 20:31:21 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.768 20:31:21 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.768 20:31:21 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.768 20:31:21 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.768 20:31:21 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.768 20:31:22 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:47.768 20:31:22 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
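The opcode loop above walked accel_get_opc_assignments and mapped every operation to the software module, as expected with no hardware accel engine configured on this node; the raw map can be pulled the same way by hand (assuming a running target on the default socket):

    ./scripts/rpc.py accel_get_opc_assignments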
00:05:47.768 20:31:22 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.768 20:31:22 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:47.768 20:31:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:47.768 20:31:22 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:47.768 20:31:22 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:47.768 20:31:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.768 20:31:22 accel -- common/autotest_common.sh@10 -- # set +x 00:05:47.768 ************************************ 00:05:47.768 START TEST accel_missing_filename 00:05:47.768 ************************************ 00:05:47.768 20:31:22 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:47.768 20:31:22 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:47.768 20:31:22 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:47.768 20:31:22 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:47.768 20:31:22 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.768 20:31:22 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:47.768 20:31:22 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.768 20:31:22 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:47.768 20:31:22 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:47.768 20:31:22 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:47.768 20:31:22 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.768 20:31:22 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.768 20:31:22 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.768 20:31:22 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.768 20:31:22 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.768 20:31:22 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:47.768 20:31:22 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:47.768 [2024-07-15 20:31:22.122782] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:05:47.768 [2024-07-15 20:31:22.122856] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2520310 ] 00:05:47.768 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.768 [2024-07-15 20:31:22.179903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.028 [2024-07-15 20:31:22.256037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.028 [2024-07-15 20:31:22.297169] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:48.028 [2024-07-15 20:31:22.357314] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:48.028 A filename is required. 
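The compress workload requires an input file, so the deliberately broken invocation above omits -l and accel_perf aborts right after spdk_app_start. For contrast, the well-formed shape (the same command the next test runs, here from the repo root, before it adds the unsupported -y):

    ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib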
00:05:48.028 20:31:22 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:48.028 20:31:22 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:48.028 20:31:22 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:48.028 20:31:22 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:48.028 20:31:22 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:48.028 20:31:22 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:48.028 00:05:48.028 real 0m0.338s 00:05:48.028 user 0m0.259s 00:05:48.028 sys 0m0.118s 00:05:48.028 20:31:22 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.028 20:31:22 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:48.028 ************************************ 00:05:48.028 END TEST accel_missing_filename 00:05:48.028 ************************************ 00:05:48.028 20:31:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:48.028 20:31:22 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:48.028 20:31:22 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:48.028 20:31:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.028 20:31:22 accel -- common/autotest_common.sh@10 -- # set +x 00:05:48.028 ************************************ 00:05:48.028 START TEST accel_compress_verify 00:05:48.028 ************************************ 00:05:48.028 20:31:22 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:48.028 20:31:22 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:48.028 20:31:22 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:48.028 20:31:22 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:48.028 20:31:22 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.028 20:31:22 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:48.028 20:31:22 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.028 20:31:22 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:48.028 20:31:22 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:48.028 20:31:22 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:48.028 20:31:22 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:48.028 20:31:22 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:48.028 20:31:22 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.028 20:31:22 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.028 20:31:22 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.028 20:31:22 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:48.028 20:31:22 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:48.288 [2024-07-15 20:31:22.528451] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:05:48.288 [2024-07-15 20:31:22.528521] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2520336 ] 00:05:48.288 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.288 [2024-07-15 20:31:22.584991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.288 [2024-07-15 20:31:22.658010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.288 [2024-07-15 20:31:22.699291] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:48.288 [2024-07-15 20:31:22.758626] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:48.548 00:05:48.548 Compression does not support the verify option, aborting. 00:05:48.548 20:31:22 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:48.548 20:31:22 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:48.548 20:31:22 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:48.548 20:31:22 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:48.548 20:31:22 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:48.548 20:31:22 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:48.548 00:05:48.548 real 0m0.332s 00:05:48.548 user 0m0.252s 00:05:48.548 sys 0m0.121s 00:05:48.548 20:31:22 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.548 20:31:22 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:48.548 ************************************ 00:05:48.548 END TEST accel_compress_verify 00:05:48.548 ************************************ 00:05:48.548 20:31:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:48.548 20:31:22 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:48.548 20:31:22 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:48.548 20:31:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.548 20:31:22 accel -- common/autotest_common.sh@10 -- # set +x 00:05:48.548 ************************************ 00:05:48.548 START TEST accel_wrong_workload 00:05:48.548 ************************************ 00:05:48.548 20:31:22 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:48.548 20:31:22 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:48.548 20:31:22 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:48.548 20:31:22 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:48.548 20:31:22 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.548 20:31:22 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:48.548 20:31:22 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.548 20:31:22 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:05:48.548 20:31:22 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:48.548 20:31:22 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:48.548 20:31:22 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:48.548 20:31:22 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:48.548 20:31:22 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.548 20:31:22 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.548 20:31:22 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.548 20:31:22 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:48.548 20:31:22 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:48.548 Unsupported workload type: foobar 00:05:48.548 [2024-07-15 20:31:22.918964] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:48.548 accel_perf options: 00:05:48.548 [-h help message] 00:05:48.548 [-q queue depth per core] 00:05:48.548 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:48.548 [-T number of threads per core 00:05:48.548 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:48.548 [-t time in seconds] 00:05:48.548 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:48.548 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:48.548 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:48.548 [-l for compress/decompress workloads, name of uncompressed input file 00:05:48.548 [-S for crc32c workload, use this seed value (default 0) 00:05:48.549 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:48.549 [-f for fill workload, use this BYTE value (default 255) 00:05:48.549 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:48.549 [-y verify result if this switch is on] 00:05:48.549 [-a tasks to allocate per core (default: same value as -q)] 00:05:48.549 Can be used to spread operations across a wider range of memory. 
00:05:48.549 20:31:22 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:48.549 20:31:22 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:48.549 20:31:22 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:48.549 20:31:22 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:48.549 00:05:48.549 real 0m0.034s 00:05:48.549 user 0m0.019s 00:05:48.549 sys 0m0.015s 00:05:48.549 20:31:22 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.549 20:31:22 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:48.549 ************************************ 00:05:48.549 END TEST accel_wrong_workload 00:05:48.549 ************************************ 00:05:48.549 Error: writing output failed: Broken pipe 00:05:48.549 20:31:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:48.549 20:31:22 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:48.549 20:31:22 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:48.549 20:31:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.549 20:31:22 accel -- common/autotest_common.sh@10 -- # set +x 00:05:48.549 ************************************ 00:05:48.549 START TEST accel_negative_buffers 00:05:48.549 ************************************ 00:05:48.549 20:31:22 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:48.549 20:31:22 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:48.549 20:31:22 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:48.549 20:31:22 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:48.549 20:31:22 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.549 20:31:22 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:48.549 20:31:22 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.549 20:31:22 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:48.549 20:31:22 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:48.549 20:31:22 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:48.549 20:31:22 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:48.549 20:31:22 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:48.549 20:31:22 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.549 20:31:22 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.549 20:31:22 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.549 20:31:22 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:48.549 20:31:22 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:48.549 -x option must be non-negative. 
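Both negative-path checks die inside spdk_app_parse_args before a reactor ever starts, which is likely also why the 'Broken pipe' notes show up: the tool exits while the harness is still piping its output. Per the option help printed above, xor wants at least two source buffers:

    ./build/examples/accel_perf -t 1 -w xor -y -x -1    # rejected: -x must be non-negative
    ./build/examples/accel_perf -t 1 -w xor -y -x 2     # minimum legal xor source count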
00:05:48.549 [2024-07-15 20:31:23.019559] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:48.549 accel_perf options: 00:05:48.549 [-h help message] 00:05:48.549 [-q queue depth per core] 00:05:48.549 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:48.549 [-T number of threads per core 00:05:48.549 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:48.549 [-t time in seconds] 00:05:48.549 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:48.549 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:48.549 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:48.549 [-l for compress/decompress workloads, name of uncompressed input file 00:05:48.549 [-S for crc32c workload, use this seed value (default 0) 00:05:48.549 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:48.549 [-f for fill workload, use this BYTE value (default 255) 00:05:48.549 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:48.549 [-y verify result if this switch is on] 00:05:48.549 [-a tasks to allocate per core (default: same value as -q)] 00:05:48.549 Can be used to spread operations across a wider range of memory. 00:05:48.549 20:31:23 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:48.549 20:31:23 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:48.549 20:31:23 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:48.549 20:31:23 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:48.549 00:05:48.549 real 0m0.034s 00:05:48.549 user 0m0.022s 00:05:48.549 sys 0m0.011s 00:05:48.549 20:31:23 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.549 20:31:23 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:48.549 ************************************ 00:05:48.549 END TEST accel_negative_buffers 00:05:48.549 ************************************ 00:05:48.808 Error: writing output failed: Broken pipe 00:05:48.808 20:31:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:48.808 20:31:23 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:48.808 20:31:23 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:48.808 20:31:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.808 20:31:23 accel -- common/autotest_common.sh@10 -- # set +x 00:05:48.808 ************************************ 00:05:48.808 START TEST accel_crc32c 00:05:48.808 ************************************ 00:05:48.808 20:31:23 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:48.808 20:31:23 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:48.808 20:31:23 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:48.808 20:31:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.808 20:31:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.808 20:31:23 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:48.808 20:31:23 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y
00:05:48.809 20:31:23 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:05:48.809 20:31:23 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:48.809 20:31:23 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:48.809 20:31:23 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=,
00:05:48.809 20:31:23 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r .
00:05:48.809 [2024-07-15 20:31:23.119132] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization...
00:05:48.809 [2024-07-15 20:31:23.119192] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2520554 ]
00:05:48.809 EAL: No free 2048 kB hugepages reported on node 1
00:05:48.809 [2024-07-15 20:31:23.173636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:48.809 [2024-07-15 20:31:23.249513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:48.809 20:31:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c
00:05:48.809 20:31:23 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c
00:05:48.809 20:31:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32
00:05:49.067 20:31:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:49.067 20:31:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software
00:05:49.067 20:31:23 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software
00:05:49.067 20:31:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds'
00:05:49.067 20:31:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes
00:05:50.004 20:31:24 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:50.004 20:31:24 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]]
00:05:50.004 20:31:24 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:50.004 real 0m1.337s
00:05:50.004 user 0m1.240s
00:05:50.004 sys 0m0.111s
00:05:50.004 20:31:24 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:50.004 20:31:24 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x
00:05:50.004 ************************************
00:05:50.004 END TEST accel_crc32c
00:05:50.004 ************************************
00:05:50.004 20:31:24 accel -- common/autotest_common.sh@1142 -- # return 0
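The accel_crc32c pass above is driven end to end by the accel.sh harness, but the command it runs is fully visible in the trace. A minimal hand reproduction from an SPDK tree that has already been built with its examples (the flag readings in the comments are inferred from the trace, not stated by it):

    # from the SPDK repo root, after ./configure && make
    #   -t 1       run for one second (matches the val='1 seconds' trace entry)
    #   -w crc32c  exercise the crc32c opcode, here served by the software module
    #   -y         verify each computed result
    #   -S 32      passed through verbatim from accel.sh; its meaning is not shown in this log
    ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y

The harness additionally feeds a generated JSON accel configuration to the binary over -c /dev/fd/62; build_accel_config assembles it from the accel_json_cfg array, and since no optional module is configured here the array stays empty, which is why the @27 checks at the end resolve to the software module.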
00:05:50.004 20:31:24 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2
00:05:50.296 ************************************
00:05:50.296 START TEST accel_crc32c_C2
00:05:50.296 ************************************
00:05:50.296 20:31:24 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2
00:05:50.296 20:31:24 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2
00:05:50.296 20:31:24 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2
00:05:50.296 20:31:24 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:05:50.296 [2024-07-15 20:31:24.520440] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization...
00:05:50.296 [2024-07-15 20:31:24.520514] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2520827 ]
00:05:50.296 EAL: No free 2048 kB hugepages reported on node 1
00:05:50.296 [2024-07-15 20:31:24.576123] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:50.296 [2024-07-15 20:31:24.647905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:50.296 20:31:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c
00:05:50.296 20:31:24 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c
00:05:50.296 20:31:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:05:50.296 20:31:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:50.296 20:31:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:05:50.296 20:31:24 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:05:50.297 20:31:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:05:51.672 20:31:25 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:51.672 20:31:25 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]]
00:05:51.672 20:31:25 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:51.672 real 0m1.337s
00:05:51.672 user 0m1.231s
00:05:51.672 sys 0m0.120s
00:05:51.672 20:31:25 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:51.672 20:31:25 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:05:51.672 ************************************
00:05:51.672 END TEST accel_crc32c_C2
00:05:51.672 ************************************
00:05:51.672 20:31:25 accel -- common/autotest_common.sh@1142 -- # return 0
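Every case in this file goes through the same run_test wrapper from common/autotest_common.sh, which is what emits the banner lines and the real/user/sys triple at the end of each test. A hypothetical sketch of the behaviour visible in this log (the actual helper is more involved; this is a reconstruction, not SPDK's code):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"    # the real/user/sys lines come from a timed invocation like this
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

Worth noting: the -C 2 chained variant finishes in the same 1.337s of wall time as the single-shot crc32c run, so at this 4 KiB block size the chaining overhead is lost in the fixed one-second measurement window.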
00:05:51.672 20:31:25 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y
00:05:51.672 ************************************
00:05:51.672 START TEST accel_copy
00:05:51.672 ************************************
00:05:51.672 20:31:25 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y
00:05:51.672 20:31:25 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y
00:05:51.672 20:31:25 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:05:51.672 20:31:25 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config
00:05:51.672 [2024-07-15 20:31:25.924292] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization...
00:05:51.672 [2024-07-15 20:31:25.924356] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2521105 ]
00:05:51.672 EAL: No free 2048 kB hugepages reported on node 1
00:05:51.672 [2024-07-15 20:31:25.979933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:51.672 [2024-07-15 20:31:26.051473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:51.672 20:31:26 accel.accel_copy -- accel/accel.sh@20 -- # val=copy
00:05:51.672 20:31:26 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy
00:05:51.672 20:31:26 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:51.672 20:31:26 accel.accel_copy -- accel/accel.sh@20 -- # val=software
00:05:51.672 20:31:26 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software
00:05:51.672 20:31:26 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds'
00:05:53.048 20:31:27 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:53.048 20:31:27 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]]
00:05:53.048 20:31:27 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:53.048 real 0m1.336s
00:05:53.048 user 0m1.233s
00:05:53.048 sys 0m0.117s
00:05:53.048 20:31:27 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:53.048 20:31:27 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x
00:05:53.048 ************************************
00:05:53.048 END TEST accel_copy
00:05:53.048 ************************************
00:05:53.048 20:31:27 accel -- common/autotest_common.sh@1142 -- # return 0
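Because each workload pins a single core (-c 0x1 in the EAL parameters) and measures for a fixed second, the wall-clock figures all land near 1.34s: the one-second window plus roughly 0.3s of EAL and reactor startup. A rough cut for pulling the per-test numbers out of a saved copy of this console log, assuming one entry per line and GNU grep, awk and paste (the file name is a placeholder):

    grep -E 'START TEST|[0-9] (real|user|sys) ' console.log |
        awk '{ print $NF }' | paste - - - -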
00:05:53.048 20:31:27 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:05:53.048 ************************************
00:05:53.048 START TEST accel_fill
00:05:53.048 ************************************
00:05:53.048 20:31:27 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:05:53.048 20:31:27 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:05:53.049 20:31:27 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:05:53.049 20:31:27 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config
00:05:53.049 [2024-07-15 20:31:27.329139] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization...
00:05:53.049 [2024-07-15 20:31:27.329204] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2521365 ]
00:05:53.049 EAL: No free 2048 kB hugepages reported on node 1
00:05:53.049 [2024-07-15 20:31:27.385369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:53.049 [2024-07-15 20:31:27.454805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:53.049 20:31:27 accel.accel_fill -- accel/accel.sh@20 -- # val=fill
00:05:53.049 20:31:27 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill
00:05:53.049 20:31:27 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80
00:05:53.049 20:31:27 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:53.049 20:31:27 accel.accel_fill -- accel/accel.sh@20 -- # val=software
00:05:53.049 20:31:27 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software
00:05:53.049 20:31:27 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:05:53.049 20:31:27 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:05:53.049 20:31:27 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds'
00:05:54.425 20:31:28 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:54.425 20:31:28 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]]
00:05:54.425 20:31:28 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:54.425 real 0m1.334s
00:05:54.425 user 0m1.239s
00:05:54.425 sys 0m0.109s
00:05:54.425 20:31:28 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:54.425 20:31:28 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x
00:05:54.425 ************************************
00:05:54.425 END TEST accel_fill
00:05:54.425 ************************************
00:05:54.425 20:31:28 accel -- common/autotest_common.sh@1142 -- # return 0
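The fill case is the only one in this batch that overrides the wrapper defaults, and the trace shows where each flag lands: -f 128 resurfaces as val=0x80 (128 decimal, the fill byte), and the two val=64 entries line up with -q 64 and -a 64, where the other workloads trace 32s. Reproducing just this case by hand, under the same built-tree assumption as above:

    ./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y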
00:05:54.425 20:31:28 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:05:54.425 ************************************
00:05:54.425 START TEST accel_copy_crc32c
00:05:54.425 ************************************
00:05:54.426 20:31:28 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y
00:05:54.426 20:31:28 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:05:54.426 20:31:28 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:05:54.426 20:31:28 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:05:54.426 [2024-07-15 20:31:28.731434] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization...
00:05:54.426 [2024-07-15 20:31:28.731492] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2521612 ]
00:05:54.426 EAL: No free 2048 kB hugepages reported on node 1
00:05:54.426 [2024-07-15 20:31:28.787108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:54.426 [2024-07-15 20:31:28.859118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:54.426 20:31:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c
00:05:54.426 20:31:28 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:05:54.426 20:31:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0
00:05:54.426 20:31:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:54.684 20:31:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:54.685 20:31:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software
00:05:54.685 20:31:28 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software
00:05:54.685 20:31:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds'
00:05:55.621 20:31:30 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:55.622 20:31:30 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:05:55.622 20:31:30 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:55.622 real 0m1.336s
00:05:55.622 user 0m1.238s
00:05:55.622 sys 0m0.114s
00:05:55.622 20:31:30 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:55.622 20:31:30 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x
00:05:55.622 ************************************
00:05:55.622 END TEST accel_copy_crc32c
00:05:55.622 ************************************
00:05:55.622 20:31:30 accel -- common/autotest_common.sh@1142 -- # return 0
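A detail of how the harness launches the binary: -c /dev/fd/62 is the footprint of bash process substitution, i.e. accel.sh hands accel_perf the read end of a pipe carrying the JSON config that build_accel_config just produced. The same pattern by hand would look like the following, where the empty JSON document is a stand-in for build_accel_config output and is an assumption, not something this log shows:

    # config delivered via process substitution, as the harness does
    ./build/examples/accel_perf -c <(echo '{}') -t 1 -w copy_crc32c -y

Also visible in the trace: copy_crc32c stages two 4096-byte buffers where plain crc32c stages one, consistent with an opcode that both copies the data and checksums it.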
00:05:55.622 20:31:30 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:05:55.882 ************************************
00:05:55.882 START TEST accel_copy_crc32c_C2
00:05:55.882 ************************************
00:05:55.882 20:31:30 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2
00:05:55.882 20:31:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:05:55.882 20:31:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:05:55.882 20:31:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:05:55.882 [2024-07-15 20:31:30.134975] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization...
00:05:55.882 [2024-07-15 20:31:30.135023] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2521870 ]
00:05:55.882 EAL: No free 2048 kB hugepages reported on node 1
00:05:55.882 [2024-07-15 20:31:30.190702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:55.882 [2024-07-15 20:31:30.268554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:55.882 20:31:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c
00:05:55.882 20:31:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:05:55.882 20:31:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:05:55.882 20:31:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:55.882 20:31:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes'
00:05:55.882 20:31:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:05:55.882 20:31:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:05:55.882 20:31:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:05:57.262 20:31:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:57.262 20:31:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:05:57.262 20:31:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:57.262 real 0m1.343s
00:05:57.262 user 0m1.246s
00:05:57.262 sys 0m0.110s
00:05:57.262 20:31:31 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:57.262 20:31:31 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:05:57.262 ************************************
00:05:57.262 END TEST accel_copy_crc32c_C2
00:05:57.262 ************************************
00:05:57.262 20:31:31 accel -- common/autotest_common.sh@1142 -- # return 0
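With accel_copy_crc32c_C2 done, six software-module workloads have completed; before the dualcast case starts below, their timings (copied from the END TEST blocks above) line up as:

    test                   real      user      sys
    accel_crc32c           0m1.337s  0m1.240s  0m0.111s
    accel_crc32c_C2        0m1.337s  0m1.231s  0m0.120s
    accel_copy             0m1.336s  0m1.233s  0m0.117s
    accel_fill             0m1.334s  0m1.239s  0m0.109s
    accel_copy_crc32c      0m1.336s  0m1.238s  0m0.114s
    accel_copy_crc32c_C2   0m1.343s  0m1.246s  0m0.110s

The spread across operations is under one percent, as expected when a fixed one-second measurement window dominates the runtime.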
00:05:57.262 [2024-07-15 20:31:31.542937] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2522117 ] 00:05:57.262 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.262 [2024-07-15 20:31:31.598509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.262 [2024-07-15 20:31:31.673417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.262 20:31:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:58.641 20:31:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:58.641 20:31:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:58.641 20:31:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:58.641 20:31:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:58.641 20:31:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:58.641 20:31:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:58.641 20:31:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:58.641 20:31:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:58.641 20:31:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:58.641 20:31:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:58.641 20:31:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:58.641 20:31:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:58.641 20:31:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:58.641 20:31:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:58.641 20:31:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:58.641 20:31:32 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:58.641 20:31:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:58.641 20:31:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:58.641 20:31:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:58.641 20:31:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:58.641 20:31:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:58.641 20:31:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:58.641 20:31:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:58.641 20:31:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:58.641 20:31:32 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:58.641 20:31:32 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:58.641 20:31:32 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.641 00:05:58.642 real 0m1.338s 00:05:58.642 user 0m1.241s 00:05:58.642 sys 0m0.110s 00:05:58.642 20:31:32 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.642 20:31:32 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:58.642 ************************************ 00:05:58.642 END TEST accel_dualcast 00:05:58.642 ************************************ 00:05:58.642 20:31:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:58.642 20:31:32 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:58.642 20:31:32 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:58.642 20:31:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.642 20:31:32 accel -- common/autotest_common.sh@10 -- # set +x 00:05:58.642 ************************************ 00:05:58.642 START TEST accel_compare 00:05:58.642 ************************************ 00:05:58.642 20:31:32 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:05:58.642 20:31:32 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:58.642 20:31:32 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:58.642 20:31:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:58.642 20:31:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:58.642 20:31:32 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:58.642 20:31:32 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:58.642 20:31:32 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:58.642 20:31:32 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.642 20:31:32 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.642 20:31:32 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.642 20:31:32 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.642 20:31:32 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.642 20:31:32 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:58.642 20:31:32 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:58.642 [2024-07-15 20:31:32.947782] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
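The three checks ahead of each END TEST banner are the pass/fail gate for the case: a module answered, the expected opcode was observed, and the module was the software engine (xtrace renders the quoted right-hand side of == as the escaped \s\o\f\t\w\a\r\e). Unescaped, accel.sh@27 amounts to:

    [[ -n $accel_module ]]             # some engine executed the operation
    [[ -n $accel_opc ]]                # the requested opcode was reported back
    [[ $accel_module == "software" ]]  # and it ran on the software path, not HW offload

The real/user/sys triple printed just above them is the shell timing of the roughly one-second run requested with -t 1.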
00:05:58.642 [2024-07-15 20:31:32.947829] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2522369 ] 00:05:58.642 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.642 [2024-07-15 20:31:33.002376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.642 [2024-07-15 20:31:33.072134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:58.642 20:31:33 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:58.642 20:31:33 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:58.900 20:31:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:58.900 20:31:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:58.900 20:31:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:58.900 20:31:33 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:58.900 20:31:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:58.900 20:31:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:58.900 20:31:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:58.900 20:31:33 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:58.901 20:31:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:58.901 20:31:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:58.901 20:31:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:58.901 20:31:33 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:58.901 20:31:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:58.901 20:31:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:58.901 20:31:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:58.901 20:31:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:58.901 20:31:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:58.901 20:31:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:58.901 20:31:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:58.901 20:31:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:58.901 20:31:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:58.901 20:31:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:58.901 20:31:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:59.835 20:31:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:59.835 20:31:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:59.835 20:31:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:59.835 20:31:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:59.835 20:31:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:59.835 20:31:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:59.835 20:31:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:59.835 20:31:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:59.835 20:31:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:59.835 20:31:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:59.835 20:31:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:59.835 20:31:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:59.835 20:31:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:59.835 20:31:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:59.835 20:31:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:59.835 20:31:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:59.835 
20:31:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:59.835 20:31:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:59.835 20:31:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:59.835 20:31:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:59.835 20:31:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:59.835 20:31:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:59.835 20:31:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:59.835 20:31:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:59.835 20:31:34 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:59.835 20:31:34 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:59.835 20:31:34 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.835 00:05:59.835 real 0m1.333s 00:05:59.835 user 0m1.227s 00:05:59.835 sys 0m0.119s 00:05:59.835 20:31:34 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.835 20:31:34 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:59.835 ************************************ 00:05:59.835 END TEST accel_compare 00:05:59.835 ************************************ 00:05:59.835 20:31:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:59.835 20:31:34 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:59.835 20:31:34 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:59.835 20:31:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.835 20:31:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.097 ************************************ 00:06:00.097 START TEST accel_xor 00:06:00.097 ************************************ 00:06:00.097 20:31:34 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:00.097 [2024-07-15 20:31:34.349268] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
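The xor case drives the same harness with -w xor; -t 1 bounds the run to one second and -y asks accel_perf to verify each completed operation. No -x option is given, and the config trace below reads val=2, consistent with a default of two XOR source buffers:

    # Conceptually, per byte of each 4096-byte block:
    #   dst[i] = src0[i] ^ src1[i]
    ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y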
00:06:00.097 [2024-07-15 20:31:34.349317] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2522617 ] 00:06:00.097 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.097 [2024-07-15 20:31:34.403343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.097 [2024-07-15 20:31:34.475079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:00.097 20:31:34 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:00.097 20:31:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.470 20:31:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:01.470 20:31:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.470 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.470 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.470 20:31:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:01.470 20:31:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.470 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.470 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:01.471 00:06:01.471 real 0m1.334s 00:06:01.471 user 0m1.224s 00:06:01.471 sys 0m0.124s 00:06:01.471 20:31:35 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.471 20:31:35 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:01.471 ************************************ 00:06:01.471 END TEST accel_xor 00:06:01.471 ************************************ 00:06:01.471 20:31:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:01.471 20:31:35 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:01.471 20:31:35 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:01.471 20:31:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.471 20:31:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:01.471 ************************************ 00:06:01.471 START TEST accel_xor 00:06:01.471 ************************************ 00:06:01.471 20:31:35 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:01.471 [2024-07-15 20:31:35.750601] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
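This second accel_xor case repeats the workload with -x 3. The only change in the config trace below is val=3 where the previous run read val=2, the XOR source-buffer count moving from two to three:

    # Three-source XOR per byte of each 4096-byte block:
    #   dst[i] = src0[i] ^ src1[i] ^ src2[i]
    ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3

Note that the banner says START TEST accel_xor both times: run_test takes the displayed name from its first argument, so the two runs are distinguished only by their arguments.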
00:06:01.471 [2024-07-15 20:31:35.750670] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2522864 ] 00:06:01.471 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.471 [2024-07-15 20:31:35.806774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.471 [2024-07-15 20:31:35.879683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.471 20:31:35 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.471 20:31:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.867 20:31:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:02.867 20:31:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.867 20:31:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.867 20:31:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.867 20:31:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:02.867 20:31:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.867 20:31:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.867 20:31:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.867 20:31:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:02.867 20:31:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.867 20:31:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.867 20:31:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.867 20:31:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:02.867 20:31:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.867 20:31:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.867 20:31:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.867 20:31:37 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:02.867 20:31:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.867 20:31:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.867 20:31:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.867 20:31:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:02.867 20:31:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.867 20:31:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.867 20:31:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.867 20:31:37 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:02.867 20:31:37 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:02.867 20:31:37 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.867 00:06:02.867 real 0m1.339s 00:06:02.867 user 0m1.237s 00:06:02.867 sys 0m0.115s 00:06:02.867 20:31:37 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.867 20:31:37 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:02.867 ************************************ 00:06:02.867 END TEST accel_xor 00:06:02.867 ************************************ 00:06:02.867 20:31:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:02.867 20:31:37 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:02.867 20:31:37 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:02.867 20:31:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.867 20:31:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.867 ************************************ 00:06:02.867 START TEST accel_dif_verify 00:06:02.867 ************************************ 00:06:02.867 20:31:37 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:02.867 20:31:37 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:02.867 20:31:37 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:02.867 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.867 20:31:37 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:02.867 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.867 20:31:37 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:02.868 [2024-07-15 20:31:37.153191] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
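dif_verify moves from raw data movement to protection-information checking. The sizes read back in the trace below ('4096 bytes' twice, then '512 bytes' and '8 bytes') line up with the standard T10 DIF layout: payload split into 512-byte blocks, each guarded by 8 bytes of metadata. The field breakdown here is the standard layout, not something printed in this log:

    # T10 DIF protection information, 8 bytes per block, which dif_verify
    # recomputes from the data and compares against the stored copy:
    #   bytes 0-1: guard tag      (CRC of the block's data)
    #   bytes 2-3: application tag
    #   bytes 4-7: reference tag  (typically the block's LBA)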
00:06:02.868 [2024-07-15 20:31:37.153260] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2523115 ] 00:06:02.868 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.868 [2024-07-15 20:31:37.210591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.868 [2024-07-15 20:31:37.282716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.868 20:31:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.246 20:31:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:06:04.246 20:31:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.246 20:31:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.246 20:31:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.246 20:31:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:04.246 20:31:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.246 20:31:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.246 20:31:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.246 20:31:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:04.246 20:31:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.246 20:31:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.246 20:31:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.246 20:31:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:04.246 20:31:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.246 20:31:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.246 20:31:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.246 20:31:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:04.246 20:31:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.246 20:31:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.246 20:31:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.246 20:31:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:04.246 20:31:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.246 20:31:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.246 20:31:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.246 20:31:38 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:04.246 20:31:38 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:04.246 20:31:38 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.246 00:06:04.246 real 0m1.334s 00:06:04.246 user 0m1.230s 00:06:04.246 sys 0m0.120s 00:06:04.246 20:31:38 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.246 20:31:38 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:04.246 ************************************ 00:06:04.246 END TEST accel_dif_verify 00:06:04.246 ************************************ 00:06:04.246 20:31:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:04.246 20:31:38 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:04.246 20:31:38 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:04.246 20:31:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.246 20:31:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.246 ************************************ 00:06:04.246 START TEST accel_dif_generate 00:06:04.246 ************************************ 00:06:04.246 20:31:38 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
dif_generate 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:04.246 [2024-07-15 20:31:38.535082] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:06:04.246 [2024-07-15 20:31:38.535120] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2523364 ] 00:06:04.246 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.246 [2024-07-15 20:31:38.587942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.246 [2024-07-15 20:31:38.659765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:04.246 20:31:38 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.246 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.247 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.247 20:31:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:04.247 20:31:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.247 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.247 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.247 20:31:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:04.247 20:31:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.247 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.247 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.247 20:31:38 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:04.247 20:31:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.247 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.247 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.247 20:31:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:04.247 20:31:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.247 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.247 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.247 20:31:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:04.247 20:31:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.247 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.247 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.247 20:31:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:04.247 20:31:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.247 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.247 20:31:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:05.646 20:31:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:05.646 20:31:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:05.646 20:31:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:05.646 20:31:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:05.646 20:31:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:05.646 20:31:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:05.646 20:31:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:05.646 20:31:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:05.646 20:31:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:05.646 20:31:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:05.646 20:31:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:05.646 20:31:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:05.646 20:31:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:05.646 20:31:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:05.646 20:31:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:05.646 20:31:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:05.646 20:31:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:05.646 20:31:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:05.646 20:31:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:05.646 20:31:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:05.646 20:31:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:05.646 20:31:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:05.646 20:31:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:05.646 20:31:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:05.646 20:31:39 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:05.646 20:31:39 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:05.646 20:31:39 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.646 00:06:05.646 real 0m1.319s 00:06:05.646 user 0m1.223s 00:06:05.646 sys 0m0.111s 00:06:05.646 20:31:39 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.646 20:31:39 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:05.646 ************************************ 00:06:05.646 END TEST accel_dif_generate 00:06:05.646 ************************************ 00:06:05.646 20:31:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:05.646 20:31:39 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:05.646 20:31:39 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:05.646 20:31:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.646 20:31:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.646 ************************************ 00:06:05.646 START TEST accel_dif_generate_copy 00:06:05.646 ************************************ 00:06:05.646 20:31:39 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:05.646 20:31:39 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:05.646 20:31:39 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:05.646 20:31:39 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:05.646 20:31:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.647 20:31:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.647 20:31:39 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:05.647 20:31:39 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:05.647 20:31:39 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.647 20:31:39 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.647 20:31:39 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.647 20:31:39 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.647 20:31:39 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.647 20:31:39 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:05.647 20:31:39 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:05.647 [2024-07-15 20:31:39.920597] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
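The accel_dif_generate pass above completes in about 1.32 s of wall time on the software module, with 4096-byte source and destination buffers, a 512-byte block size, and 8 bytes of metadata per the traced values. A minimal sketch of a standalone re-run, assuming the same built SPDK checkout; dropping the harness-supplied -c /dev/fd/62 JSON config and relying on accel_perf defaults is an assumption here, not something the trace shows:

  # Sketch: re-run the traced DIF-generate workload by hand.
  # Only -t and -w are taken verbatim from the trace above; the
  # fd-based JSON config plumbing from accel.sh is omitted.
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./build/examples/accel_perf -t 1 -w dif_generate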
00:06:05.647 [2024-07-15 20:31:39.920637] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2523611 ] 00:06:05.647 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.647 [2024-07-15 20:31:39.973629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.647 [2024-07-15 20:31:40.058542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.647 20:31:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.024 20:31:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:07.024 20:31:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.024 20:31:41 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:07.024 20:31:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.024 20:31:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:07.024 20:31:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.024 20:31:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.024 20:31:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.024 20:31:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:07.024 20:31:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.024 20:31:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.024 20:31:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.024 20:31:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:07.024 20:31:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.024 20:31:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.024 20:31:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.024 20:31:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:07.024 20:31:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.024 20:31:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.024 20:31:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.024 20:31:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:07.024 20:31:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.024 20:31:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.024 20:31:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.024 20:31:41 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:07.024 20:31:41 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:07.024 20:31:41 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:07.024 00:06:07.024 real 0m1.332s 00:06:07.024 user 0m1.234s 00:06:07.024 sys 0m0.111s 00:06:07.024 20:31:41 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.025 20:31:41 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:07.025 ************************************ 00:06:07.025 END TEST accel_dif_generate_copy 00:06:07.025 ************************************ 00:06:07.025 20:31:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:07.025 20:31:41 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:07.025 20:31:41 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:07.025 20:31:41 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:07.025 20:31:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.025 20:31:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.025 ************************************ 00:06:07.025 START TEST accel_comp 00:06:07.025 ************************************ 00:06:07.025 20:31:41 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:07.025 20:31:41 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:06:07.025 20:31:41 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:07.025 20:31:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:07.025 20:31:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:07.025 20:31:41 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:07.025 20:31:41 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:07.025 20:31:41 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:07.025 20:31:41 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.025 20:31:41 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.025 20:31:41 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.025 20:31:41 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.025 20:31:41 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.025 20:31:41 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:07.025 20:31:41 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:07.025 [2024-07-15 20:31:41.335956] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:06:07.025 [2024-07-15 20:31:41.336008] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2523863 ] 00:06:07.025 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.025 [2024-07-15 20:31:41.390170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.025 [2024-07-15 20:31:41.462950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.025 20:31:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:07.025 20:31:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.025 20:31:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:07.025 20:31:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:07.025 20:31:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:07.284 20:31:41 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:07.284 20:31:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:08.233 20:31:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:08.233 20:31:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.233 20:31:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:08.233 20:31:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:08.233 20:31:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:08.233 20:31:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.233 20:31:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:08.233 20:31:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:08.233 20:31:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:08.233 20:31:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.233 20:31:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:08.233 20:31:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:08.233 20:31:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:08.233 20:31:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.233 20:31:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:08.233 20:31:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:08.233 20:31:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:08.233 20:31:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.233 20:31:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:08.233 20:31:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:08.233 20:31:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:08.233 20:31:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.233 20:31:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:08.233 20:31:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:08.233 20:31:42 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:08.233 20:31:42 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:08.233 20:31:42 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.233 00:06:08.233 real 0m1.336s 00:06:08.233 user 0m1.239s 00:06:08.233 sys 0m0.110s 00:06:08.233 20:31:42 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.233 20:31:42 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:08.233 ************************************ 00:06:08.233 END TEST accel_comp 00:06:08.233 ************************************ 00:06:08.233 20:31:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:08.233 20:31:42 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:08.233 20:31:42 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:08.233 20:31:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.233 20:31:42 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:08.557 ************************************ 00:06:08.557 START TEST accel_decomp 00:06:08.557 ************************************ 00:06:08.557 20:31:42 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:08.557 [2024-07-15 20:31:42.730761] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
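The accel_comp pass above pushes test/accel/bib from the SPDK tree through the software compress path in about 1.34 s; the accel_decomp test now starting feeds the same file through decompress with -y to verify the output. A minimal sketch of the pair, under the same checkout assumption as the earlier re-run:

  # Sketch: the traced compress/decompress pair, run by hand.
  # -l names the input file shipped in the SPDK tree and -y asks
  # accel_perf to verify the decompressed data; both appear in the trace.
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y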
00:06:08.557 [2024-07-15 20:31:42.730802] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2524112 ] 00:06:08.557 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.557 [2024-07-15 20:31:42.783064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.557 [2024-07-15 20:31:42.855557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.557 20:31:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.558 20:31:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.558 20:31:42 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:06:08.558 20:31:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.558 20:31:42 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:08.558 20:31:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.558 20:31:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.558 20:31:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:08.558 20:31:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.558 20:31:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.558 20:31:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.558 20:31:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:08.558 20:31:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.558 20:31:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.558 20:31:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.558 20:31:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:08.558 20:31:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.558 20:31:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.558 20:31:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.558 20:31:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:08.558 20:31:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.558 20:31:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.558 20:31:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.558 20:31:42 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:08.558 20:31:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.558 20:31:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.558 20:31:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.558 20:31:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:08.558 20:31:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.558 20:31:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.558 20:31:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.558 20:31:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:08.558 20:31:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.558 20:31:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.558 20:31:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.558 20:31:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:08.558 20:31:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.558 20:31:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.558 20:31:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:09.937 20:31:44 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:09.937 20:31:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.937 20:31:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:09.937 20:31:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:09.937 20:31:44 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:09.937 20:31:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.937 20:31:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:09.937 20:31:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:09.937 20:31:44 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:09.937 20:31:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.937 20:31:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:09.937 20:31:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:09.937 20:31:44 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:09.937 20:31:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.937 20:31:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:09.937 20:31:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:09.937 20:31:44 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:09.937 20:31:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.937 20:31:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:09.937 20:31:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:09.937 20:31:44 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:09.937 20:31:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.937 20:31:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:09.937 20:31:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:09.937 20:31:44 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:09.937 20:31:44 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:09.937 20:31:44 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.937 00:06:09.937 real 0m1.323s 00:06:09.937 user 0m1.234s 00:06:09.937 sys 0m0.103s 00:06:09.937 20:31:44 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.937 20:31:44 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:09.937 ************************************ 00:06:09.937 END TEST accel_decomp 00:06:09.937 ************************************ 00:06:09.937 20:31:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:09.937 20:31:44 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:09.937 20:31:44 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:09.937 20:31:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.937 20:31:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.937 ************************************ 00:06:09.937 START TEST accel_decomp_full 00:06:09.937 ************************************ 00:06:09.937 20:31:44 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:09.937 20:31:44 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:09.937 20:31:44 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:09.937 20:31:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.937 20:31:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.937 20:31:44 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:09.937 20:31:44 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:09.937 20:31:44 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:09.937 20:31:44 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.937 20:31:44 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.937 20:31:44 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.937 20:31:44 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.937 20:31:44 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.937 20:31:44 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:09.937 20:31:44 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:09.938 [2024-07-15 20:31:44.129768] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:06:09.938 [2024-07-15 20:31:44.129831] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2524370 ] 00:06:09.938 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.938 [2024-07-15 20:31:44.184951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.938 [2024-07-15 20:31:44.256185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.938 20:31:44 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.938 20:31:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:11.318 20:31:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:11.318 20:31:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:11.318 20:31:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:11.318 20:31:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:11.318 20:31:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:11.318 20:31:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:11.318 20:31:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:11.318 20:31:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:11.318 20:31:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:11.319 20:31:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:11.319 20:31:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:11.319 20:31:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:11.319 20:31:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:11.319 20:31:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:11.319 20:31:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:11.319 20:31:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:11.319 20:31:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:11.319 20:31:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:11.319 20:31:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:11.319 20:31:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:11.319 20:31:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:11.319 20:31:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:11.319 20:31:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:11.319 20:31:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:11.319 20:31:45 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:11.319 20:31:45 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:11.319 20:31:45 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.319 00:06:11.319 real 0m1.345s 00:06:11.319 user 0m1.246s 00:06:11.319 sys 0m0.113s 00:06:11.319 20:31:45 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.319 20:31:45 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:11.319 ************************************ 00:06:11.319 END TEST accel_decomp_full 00:06:11.319 ************************************ 00:06:11.319 20:31:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:11.319 20:31:45 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:11.319 20:31:45 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:06:11.319 20:31:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.319 20:31:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.319 ************************************ 00:06:11.319 START TEST accel_decomp_mcore 00:06:11.319 ************************************ 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:11.319 [2024-07-15 20:31:45.539807] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:06:11.319 [2024-07-15 20:31:45.539858] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2524616 ] 00:06:11.319 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.319 [2024-07-15 20:31:45.593077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:11.319 [2024-07-15 20:31:45.667170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.319 [2024-07-15 20:31:45.667269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.319 [2024-07-15 20:31:45.667342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:11.319 [2024-07-15 20:31:45.667344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.319 20:31:45 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software
00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software
00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1
00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds'
00:06:11.319 20:31:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes
00:06:12.699 20:31:46 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:12.699 20:31:46 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:12.699 20:31:46 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:12.699
00:06:12.699 real    0m1.345s
00:06:12.699 user    0m4.563s
00:06:12.699 sys     0m0.126s
00:06:12.699 20:31:46 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:12.699 20:31:46 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x
00:06:12.699 ************************************
00:06:12.699 END TEST accel_decomp_mcore
00:06:12.699 ************************************
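The START/END banners and the real/user/sys blocks throughout this log come from the harness's run_test wrapper. A minimal sketch of the pattern, inferred only from what the banners themselves show; SPDK's actual run_test in autotest_common.sh does more (xtrace management, argument checks):

# Simplified sketch of the run_test pattern seen in this log; illustrative,
# not SPDK's real implementation.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"            # source of the real/user/sys lines above
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}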
00:06:12.699 20:31:46 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:12.699 20:31:46 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:06:12.699 ************************************
00:06:12.699 START TEST accel_decomp_full_mcore
00:06:12.699 ************************************
00:06:12.699 20:31:46 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:06:12.699 20:31:46 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config
00:06:12.699 20:31:46 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:06:12.699 [2024-07-15 20:31:46.954147] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization...
00:06:12.699 [2024-07-15 20:31:46.954196] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2524866 ]
00:06:12.699 EAL: No free 2048 kB hugepages reported on node 1
00:06:12.699 [2024-07-15 20:31:47.008447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:12.699 [2024-07-15 20:31:47.082497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:12.699 [2024-07-15 20:31:47.082594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:12.699 [2024-07-15 20:31:47.082657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:06:12.699 [2024-07-15 20:31:47.082658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:12.699 20:31:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf
00:06:12.699 20:31:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress
00:06:12.699 20:31:47 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress
00:06:12.699 20:31:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes'
00:06:12.699 20:31:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software
00:06:12.699 20:31:47 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software
00:06:12.699 20:31:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:06:12.699 20:31:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32
00:06:12.699 20:31:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32
00:06:12.699 20:31:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1
00:06:12.699 20:31:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds'
00:06:12.699 20:31:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes
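Every accel_perf invocation above receives its accel configuration as JSON on file descriptor 62 (-c /dev/fd/62), assembled by build_accel_config from the accel_json_cfg array. A minimal sketch of that mechanism; the '{}' document is a stand-in, since this log never prints the assembled JSON, and the binary path is taken from the log:

# Feed a JSON config to accel_perf over fd 62 without a temp file.
# '{}' is a placeholder for whatever build_accel_config assembles.
json='{}'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
    -c /dev/fd/62 -t 1 -w decompress \
    -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib \
    -y -o 0 -m 0xf 62<<< "$json"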
00:06:14.079 20:31:48 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:14.080 20:31:48 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:14.080 20:31:48 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:14.080
00:06:14.080 real    0m1.355s
00:06:14.080 user    0m4.600s
00:06:14.080 sys     0m0.123s
00:06:14.080 20:31:48 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:14.080 20:31:48 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x
00:06:14.080 ************************************
00:06:14.080 END TEST accel_decomp_full_mcore
00:06:14.080 ************************************
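A quick sanity check on those numbers: user CPU time (4.600s) is roughly four times wall-clock time (1.355s), which is what you expect when four reactors (mask 0xf, cores 0-3) run the workload in parallel; compare the single-core runs below, where user and real are nearly equal. A sketch of checking that ratio directly, assuming GNU /usr/bin/time and omitting the -c JSON config the harness normally supplies:

# A multi-core run should show user CPU time near (wall time x core count).
bin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
/usr/bin/time -f 'wall=%es user=%Us' \
    "$bin" -t 1 -w decompress \
    -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib \
    -y -o 0 -m 0xf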
00:06:14.080 20:31:48 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:14.080 20:31:48 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:06:14.080 ************************************
00:06:14.080 START TEST accel_decomp_mthread
00:06:14.080 ************************************
00:06:14.080 20:31:48 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:06:14.080 20:31:48 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config
00:06:14.080 20:31:48 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:06:14.080 [2024-07-15 20:31:48.368197] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization...
00:06:14.080 [2024-07-15 20:31:48.368246] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2525123 ]
00:06:14.080 EAL: No free 2048 kB hugepages reported on node 1
00:06:14.080 [2024-07-15 20:31:48.415476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:14.080 [2024-07-15 20:31:48.487812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:14.080 20:31:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1
00:06:14.080 20:31:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress
00:06:14.080 20:31:48 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress
00:06:14.080 20:31:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:14.080 20:31:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software
00:06:14.080 20:31:48 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software
00:06:14.080 20:31:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:06:14.080 20:31:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32
00:06:14.080 20:31:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32
00:06:14.080 20:31:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2
00:06:14.080 20:31:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds'
00:06:14.080 20:31:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes
00:06:15.459 20:31:49 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:15.459 20:31:49 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:15.459 20:31:49 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:15.459
00:06:15.459 real    0m1.322s
00:06:15.459 user    0m1.233s
00:06:15.459 sys     0m0.104s
00:06:15.459 20:31:49 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:15.459 20:31:49 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x
00:06:15.459 ************************************
00:06:15.459 END TEST accel_decomp_mthread
00:06:15.459 ************************************
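The mthread variant differs from the earlier runs only in the -T 2 argument: two worker threads on the single core 0x1, visible as val=2 in the trace. A sketch of sweeping that parameter; the flags are taken verbatim from the run_test line above, the sweep loop itself is illustrative:

# Illustrative sweep over -T (worker threads) for the decompress workload.
bin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
for t in 1 2 4; do
    echo "== decompress, -T $t =="
    "$bin" -t 1 -w decompress \
        -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib \
        -y -T "$t"
done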
00:06:15.459 20:31:49 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:15.459 20:31:49 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:06:15.459 ************************************
00:06:15.459 START TEST accel_decomp_full_mthread
00:06:15.459 ************************************
00:06:15.459 20:31:49 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:06:15.459 20:31:49 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config
00:06:15.459 20:31:49 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:06:15.459 [2024-07-15 20:31:49.756581] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization...
00:06:15.459 [2024-07-15 20:31:49.756626] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2525370 ]
00:06:15.459 EAL: No free 2048 kB hugepages reported on node 1
00:06:15.459 [2024-07-15 20:31:49.804579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:15.459 [2024-07-15 20:31:49.876443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:15.460 20:31:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1
00:06:15.460 20:31:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress
00:06:15.460 20:31:49 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress
00:06:15.460 20:31:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes'
00:06:15.460 20:31:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software
00:06:15.460 20:31:49 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software
00:06:15.460 20:31:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:06:15.460 20:31:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32
00:06:15.460 20:31:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32
00:06:15.460 20:31:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2
00:06:15.460 20:31:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds'
00:06:15.460 20:31:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes
00:06:16.839 20:31:51 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:16.839 20:31:51 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:16.839 20:31:51 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:16.839
00:06:16.839 real    0m1.343s
00:06:16.839 user    0m1.252s
00:06:16.839 sys     0m0.104s
00:06:16.839 20:31:51 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:16.839 20:31:51 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x
00:06:16.839 ************************************
00:06:16.839 END TEST accel_decomp_full_mthread
00:06:16.839 ************************************
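Comparing the traces: the plain mthread run reports val='4096 bytes' while both "full" variants, invoked with -o 0, report val='111250 bytes'. That is consistent with -o setting the transfer size and 0 meaning "process the whole input file in one buffer", though this log alone does not prove that reading; treat it as an inference. A sketch of the two invocations side by side:

# Same workload, two transfer sizes; the only difference is -o.
bin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
lib=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
"$bin" -t 1 -w decompress -l "$lib" -y -T 2        # chunked run ('4096 bytes' above)
"$bin" -t 1 -w decompress -l "$lib" -y -o 0 -T 2   # full-buffer run ('111250 bytes' above)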
00:06:16.839 20:31:51 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:16.839 20:31:51 accel -- accel/accel.sh@124 -- # [[ n == y ]]
00:06:16.839 20:31:51 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:06:16.839 ************************************
00:06:16.839 START TEST accel_dif_functional_tests
00:06:16.839 ************************************
00:06:16.839 20:31:51 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:06:16.839 [2024-07-15 20:31:51.195177] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization...
00:06:16.839 [2024-07-15 20:31:51.195213] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2525622 ]
00:06:16.839 EAL: No free 2048 kB hugepages reported on node 1
00:06:16.839 [2024-07-15 20:31:51.246871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:16.839 [2024-07-15 20:31:51.320182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:16.839 [2024-07-15 20:31:51.320280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:16.839 [2024-07-15 20:31:51.320282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:17.099
00:06:17.099 CUnit - A unit testing framework for C - Version 2.1-3
00:06:17.099 http://cunit.sourceforge.net/
00:06:17.099
00:06:17.099 Suite: accel_dif
00:06:17.099 Test: verify: DIF generated, GUARD check ...passed
00:06:17.099 Test: verify: DIF generated, APPTAG check ...passed
00:06:17.099 Test: verify: DIF generated, REFTAG check ...passed
00:06:17.099 Test: verify: DIF not generated, GUARD check ...[2024-07-15 20:31:51.388440] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:06:17.099 passed
00:06:17.099 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 20:31:51.388487] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:06:17.099 passed
00:06:17.099 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 20:31:51.388505] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:06:17.099 passed
00:06:17.099 Test: verify: APPTAG correct, APPTAG check ...passed
00:06:17.099 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 20:31:51.388547] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14
00:06:17.099 passed
00:06:17.099 Test: verify: APPTAG incorrect, no APPTAG check ...passed
00:06:17.099 Test: verify: REFTAG incorrect, REFTAG ignore ...passed
00:06:17.099 Test: verify: REFTAG_INIT correct, REFTAG check ...passed
00:06:17.099 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 20:31:51.388646] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
00:06:17.099 passed
00:06:17.099 Test: verify copy: DIF generated, GUARD check ...passed
00:06:17.099 Test: verify copy: DIF generated, APPTAG check ...passed
00:06:17.099 Test: verify copy: DIF generated, REFTAG check ...passed
00:06:17.099 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 20:31:51.388755] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:06:17.099 passed
00:06:17.099 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 20:31:51.388775] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:06:17.099 passed
00:06:17.099 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 20:31:51.388794] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:06:17.099 passed
00:06:17.099 Test: generate copy: DIF generated, GUARD check ...passed
00:06:17.099 Test: generate copy: DIF generated, APPTAG check ...passed
00:06:17.099 Test: generate copy: DIF generated, REFTAG check ...passed
00:06:17.099 Test: generate copy: DIF generated, no GUARD check flag set ...passed
00:06:17.099 Test: generate copy: DIF generated, no APPTAG check flag set ...passed
00:06:17.099 Test: generate copy: DIF generated, no REFTAG check flag set ...passed
00:06:17.099 Test: generate copy: iovecs-len validate ...[2024-07-15 20:31:51.388956] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:06:17.099 passed
00:06:17.099 Test: generate copy: buffer alignment validate ...passed
00:06:17.099
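The *ERROR* lines above are expected output: each "not generated" case deliberately corrupts a Protection Information field and passes only if the verify step rejects it. For pass/fail purposes the Run Summary just below is what matters; a small triage sketch for a saved copy of this log (ci.log is a placeholder name):

# Fail if any CUnit row (suites/tests/asserts) reports a non-zero Failed count;
# column 5 is "Failed" in the summary format printed below.
grep -A4 'Run Summary:' ci.log |
    awk '$1 ~ /^(suites|tests|asserts)$/ && $5 != 0 { bad = 1 } END { exit bad }'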
00:06:17.099 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:17.099               suites      1      1    n/a      0        0
00:06:17.099                tests     26     26     26      0        0
00:06:17.099              asserts    115    115    115      0      n/a
00:06:17.099
00:06:17.099 Elapsed time =    0.000 seconds
00:06:17.099
00:06:17.099 real    0m0.405s
00:06:17.099 user    0m0.585s
00:06:17.099 sys     0m0.134s
00:06:17.099 20:31:51 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:17.099 20:31:51 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x
00:06:17.099 ************************************
00:06:17.099 END TEST accel_dif_functional_tests
00:06:17.099 ************************************
00:06:17.358 20:31:51 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:17.358
00:06:17.358 real    0m30.969s
00:06:17.358 user    0m34.795s
00:06:17.358 sys     0m4.235s
00:06:17.358 ************************************
00:06:17.358 END TEST accel
00:06:17.358 ************************************
00:06:17.358 20:31:51 -- common/autotest_common.sh@1142 -- # return 0
00:06:17.358 20:31:51 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
00:06:17.358 ************************************
00:06:17.358 START TEST accel_rpc
00:06:17.358 ************************************
00:06:17.358 * Looking for test storage...
00:06:17.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel
00:06:17.358 20:31:51 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:06:17.358 20:31:51 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc
00:06:17.358 20:31:51 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2525690
00:06:17.358 20:31:51 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2525690
00:06:17.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:17.358 [2024-07-15 20:31:51.765798] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization...
00:06:17.358 [2024-07-15 20:31:51.765849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2525690 ]
00:06:17.358 EAL: No free 2048 kB hugepages reported on node 1
00:06:17.358 [2024-07-15 20:31:51.818521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:17.617 [2024-07-15 20:31:51.899695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:18.184 20:31:52 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite
00:06:18.184 ************************************
00:06:18.184 START TEST accel_assign_opcode
00:06:18.184 ************************************
00:06:18.184 20:31:52 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect
00:06:18.184 [2024-07-15 20:31:52.581699] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect
00:06:18.184 20:31:52 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software
00:06:18.185 [2024-07-15 20:31:52.589715] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software
00:06:18.185 20:31:52 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init
00:06:18.444 20:31:52 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments
00:06:18.444 20:31:52 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy
00:06:18.444 20:31:52 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software
00:06:18.444 software
00:06:18.444
00:06:18.444 real    0m0.232s
00:06:18.444 user    0m0.043s
00:06:18.444 sys     0m0.007s
00:06:18.444 20:31:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:18.444 20:31:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:06:18.444 ************************************
00:06:18.444 END TEST accel_assign_opcode
00:06:18.444 ************************************
00:06:18.444 20:31:52 accel_rpc -- common/autotest_common.sh@1142 -- # return 0
00:06:18.444 20:31:52 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2525690
00:06:18.444 20:31:52 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2525690'
00:06:18.444 killing process with pid 2525690
00:06:18.444 20:31:52 accel_rpc -- common/autotest_common.sh@967 -- # kill 2525690
00:06:18.444 20:31:52 accel_rpc -- common/autotest_common.sh@972 -- # wait 2525690
00:06:18.703
00:06:18.703 real    0m1.533s
00:06:18.703 user    0m1.608s
00:06:18.703 sys     0m0.385s
00:06:18.703 20:31:53 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:18.703 20:31:53 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:18.703 ************************************
00:06:18.703 END TEST accel_rpc
00:06:18.703 ************************************
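The accel_assign_opcode flow above is scriptable against any target started with --wait-for-rpc. The RPC names, the jq filter, and the rpc.py path are all taken from this log; only the sequencing commentary is inferred:

# Re-route the 'copy' opcode to the software module and verify the assignment.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$RPC" accel_assign_opc -o copy -m software    # must happen before framework init
"$RPC" framework_start_init                    # finish startup of the --wait-for-rpc target
"$RPC" accel_get_opc_assignments | jq -r .copy # expected output: software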
00:06:18.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:06:18.971 20:31:53 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:06:18.971 20:31:53 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2526088
00:06:18.971 20:31:53 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2526088
00:06:18.971 20:31:53 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:06:18.971 20:31:53 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 2526088 ']'
00:06:18.971 20:31:53 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:18.971 20:31:53 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:18.971 20:31:53 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:18.971 20:31:53 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:18.971 20:31:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:06:18.971 [2024-07-15 20:31:53.384511] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization...
00:06:18.971 [2024-07-15 20:31:53.384561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2526088 ]
00:06:18.971 EAL: No free 2048 kB hugepages reported on node 1
00:06:18.971 [2024-07-15 20:31:53.437582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:19.237 [2024-07-15 20:31:53.519999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:19.804 20:31:54 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:19.804 20:31:54 app_cmdline -- common/autotest_common.sh@862 -- # return 0
00:06:19.804 20:31:54 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:06:20.063 {
00:06:20.063   "version": "SPDK v24.09-pre git sha1 f604975ba",
00:06:20.063   "fields": {
00:06:20.063     "major": 24,
00:06:20.063     "minor": 9,
00:06:20.063     "patch": 0,
00:06:20.063     "suffix": "-pre",
00:06:20.063     "commit": "f604975ba"
00:06:20.063   }
00:06:20.063 }
00:06:20.063 20:31:54 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:06:20.063 20:31:54 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:06:20.063 20:31:54 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:06:20.063 20:31:54 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:06:20.063 20:31:54 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:06:20.063 20:31:54 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:06:20.063 20:31:54 app_cmdline -- app/cmdline.sh@26 -- # sort
00:06:20.063 20:31:54 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:20.063 20:31:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:06:20.063 20:31:54 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:20.063 20:31:54 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:06:20.063 20:31:54 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:06:20.063 20:31:54 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:06:20.063 20:31:54 app_cmdline -- common/autotest_common.sh@648 -- # local es=0
00:06:20.063 20:31:54 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:06:20.063 20:31:54 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:06:20.063 20:31:54 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:20.063 20:31:54 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:06:20.063 20:31:54 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:20.063 20:31:54 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:06:20.063 20:31:54 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:20.063 20:31:54 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:06:20.063 20:31:54 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:06:20.063 20:31:54 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:06:20.322 request:
00:06:20.322 {
00:06:20.322   "method": "env_dpdk_get_mem_stats",
00:06:20.322   "req_id": 1
00:06:20.322 }
00:06:20.322 Got JSON-RPC error response
00:06:20.322 response:
00:06:20.322 {
00:06:20.322   "code": -32601,
00:06:20.322   "message": "Method not found"
00:06:20.322 }
00:06:20.322 20:31:54 app_cmdline -- common/autotest_common.sh@651 -- # es=1
00:06:20.322 20:31:54 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:06:20.322 20:31:54 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:06:20.322 20:31:54 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:06:20.322 20:31:54 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2526088
00:06:20.322 20:31:54 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 2526088 ']'
00:06:20.322 20:31:54 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 2526088
00:06:20.322 20:31:54 app_cmdline -- common/autotest_common.sh@953 -- # uname
00:06:20.322 20:31:54 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:20.322 20:31:54 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2526088
00:06:20.322 20:31:54 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:20.322 20:31:54 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:20.322 20:31:54 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2526088'
killing process with pid 2526088
00:06:20.322 20:31:54 app_cmdline -- common/autotest_common.sh@967 -- # kill 2526088
00:06:20.322 20:31:54 app_cmdline -- common/autotest_common.sh@972 -- # wait 2526088
00:06:20.580
00:06:20.580 real 0m1.684s
00:06:20.580 user 0m2.044s
00:06:20.580 sys 0m0.401s
00:06:20.580 20:31:54 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable
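The -32601 "Method not found" above is the point of the test: spdk_tgt was launched with an RPC allowlist, so any method outside that list is rejected before dispatch. A hedged reproduction, using only the flags visible in the trace (rpc.py and spdk_tgt assumed to be on PATH):

  # Only the two allowlisted methods are exposed; everything else fails
  spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  rpc.py spdk_get_version          # allowed: returns the version object shown above
  rpc.py env_dpdk_get_mem_stats    # filtered: JSON-RPC error -32601, as logged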
00:06:20.581 20:31:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:20.581 ************************************ 00:06:20.581 END TEST app_cmdline 00:06:20.581 ************************************ 00:06:20.581 20:31:54 -- common/autotest_common.sh@1142 -- # return 0 00:06:20.581 20:31:54 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:20.581 20:31:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.581 20:31:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.581 20:31:54 -- common/autotest_common.sh@10 -- # set +x 00:06:20.581 ************************************ 00:06:20.581 START TEST version 00:06:20.581 ************************************ 00:06:20.581 20:31:54 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:20.839 * Looking for test storage... 00:06:20.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:20.839 20:31:55 version -- app/version.sh@17 -- # get_header_version major 00:06:20.839 20:31:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:20.839 20:31:55 version -- app/version.sh@14 -- # cut -f2 00:06:20.839 20:31:55 version -- app/version.sh@14 -- # tr -d '"' 00:06:20.839 20:31:55 version -- app/version.sh@17 -- # major=24 00:06:20.839 20:31:55 version -- app/version.sh@18 -- # get_header_version minor 00:06:20.839 20:31:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:20.839 20:31:55 version -- app/version.sh@14 -- # cut -f2 00:06:20.839 20:31:55 version -- app/version.sh@14 -- # tr -d '"' 00:06:20.839 20:31:55 version -- app/version.sh@18 -- # minor=9 00:06:20.839 20:31:55 version -- app/version.sh@19 -- # get_header_version patch 00:06:20.839 20:31:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:20.839 20:31:55 version -- app/version.sh@14 -- # cut -f2 00:06:20.839 20:31:55 version -- app/version.sh@14 -- # tr -d '"' 00:06:20.839 20:31:55 version -- app/version.sh@19 -- # patch=0 00:06:20.839 20:31:55 version -- app/version.sh@20 -- # get_header_version suffix 00:06:20.839 20:31:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:20.839 20:31:55 version -- app/version.sh@14 -- # cut -f2 00:06:20.839 20:31:55 version -- app/version.sh@14 -- # tr -d '"' 00:06:20.839 20:31:55 version -- app/version.sh@20 -- # suffix=-pre 00:06:20.839 20:31:55 version -- app/version.sh@22 -- # version=24.9 00:06:20.839 20:31:55 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:20.839 20:31:55 version -- app/version.sh@28 -- # version=24.9rc0 00:06:20.839 20:31:55 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:20.839 20:31:55 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:06:20.839 20:31:55 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:20.839 20:31:55 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:20.839 00:06:20.839 real 0m0.146s 00:06:20.839 user 0m0.071s 00:06:20.839 sys 0m0.109s 00:06:20.839 20:31:55 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.839 20:31:55 version -- common/autotest_common.sh@10 -- # set +x 00:06:20.839 ************************************ 00:06:20.839 END TEST version 00:06:20.839 ************************************ 00:06:20.839 20:31:55 -- common/autotest_common.sh@1142 -- # return 0 00:06:20.839 20:31:55 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:20.839 20:31:55 -- spdk/autotest.sh@198 -- # uname -s 00:06:20.839 20:31:55 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:20.839 20:31:55 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:20.839 20:31:55 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:20.839 20:31:55 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:20.839 20:31:55 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:20.839 20:31:55 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:20.839 20:31:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:20.839 20:31:55 -- common/autotest_common.sh@10 -- # set +x 00:06:20.839 20:31:55 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:20.839 20:31:55 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:20.839 20:31:55 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:20.839 20:31:55 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:20.839 20:31:55 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:20.839 20:31:55 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:20.839 20:31:55 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:20.839 20:31:55 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:20.839 20:31:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.839 20:31:55 -- common/autotest_common.sh@10 -- # set +x 00:06:20.839 ************************************ 00:06:20.839 START TEST nvmf_tcp 00:06:20.839 ************************************ 00:06:20.839 20:31:55 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:20.839 * Looking for test storage... 00:06:20.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:20.839 20:31:55 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:20.839 20:31:55 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:20.839 20:31:55 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:20.840 20:31:55 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:20.840 20:31:55 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:20.840 20:31:55 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:20.840 20:31:55 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:20.840 20:31:55 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:20.840 20:31:55 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:20.840 20:31:55 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:20.840 20:31:55 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:20.840 20:31:55 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:20.840 20:31:55 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:20.840 20:31:55 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:21.098 20:31:55 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:21.098 20:31:55 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:21.098 20:31:55 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:21.098 20:31:55 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:21.098 20:31:55 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:21.098 20:31:55 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:21.098 20:31:55 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:21.098 20:31:55 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:21.098 20:31:55 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:21.098 20:31:55 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:21.098 20:31:55 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.098 20:31:55 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.098 20:31:55 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.098 20:31:55 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:21.098 20:31:55 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.098 20:31:55 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:21.098 20:31:55 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:21.098 20:31:55 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:21.098 20:31:55 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:21.098 20:31:55 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:21.099 20:31:55 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:21.099 20:31:55 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:21.099 20:31:55 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:21.099 20:31:55 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:21.099 20:31:55 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:21.099 20:31:55 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:21.099 20:31:55 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:21.099 20:31:55 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:21.099 20:31:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:21.099 20:31:55 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:21.099 20:31:55 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:21.099 20:31:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:21.099 20:31:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.099 20:31:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:21.099 ************************************ 00:06:21.099 START TEST nvmf_example 00:06:21.099 ************************************ 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:21.099 * Looking for test storage... 
00:06:21.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:21.099 20:31:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:26.364 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:26.364 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:26.364 Found net devices under 
0000:86:00.0: cvl_0_0 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:26.364 Found net devices under 0000:86:00.1: cvl_0_1 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:06:26.364 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:26.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:26.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:06:26.365 00:06:26.365 --- 10.0.0.2 ping statistics --- 00:06:26.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:26.365 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:06:26.365 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:26.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:26.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:06:26.365 00:06:26.365 --- 10.0.0.1 ping statistics --- 00:06:26.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:26.365 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:06:26.365 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:26.365 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:26.365 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:26.365 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:26.365 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:26.365 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:26.365 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:26.365 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:26.365 20:32:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:26.365 20:32:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:26.365 20:32:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:26.365 20:32:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:26.365 20:32:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:26.365 20:32:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:26.365 20:32:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:26.365 20:32:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2529602 00:06:26.365 20:32:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:26.365 20:32:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:26.365 20:32:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2529602 00:06:26.365 20:32:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 2529602 ']' 00:06:26.365 20:32:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.365 20:32:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:26.365 20:32:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
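For orientation, the nvmf_tcp_init steps traced above amount to the following namespace plumbing, with interface names and addresses exactly as logged (a condensed sketch, not a verbatim extract of nvmf/common.sh):

  ip netns add cvl_0_0_ns_spdk                     # target side gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # move the target-side port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                               # connectivity verified in both directions above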
00:06:26.365 20:32:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:26.365 20:32:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:26.365 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.972 20:32:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:26.972 20:32:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:06:26.972 20:32:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:26.972 20:32:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:26.972 20:32:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:27.251 20:32:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:27.251 20:32:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.251 20:32:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:27.251 20:32:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.251 20:32:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:27.251 20:32:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.251 20:32:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:27.251 20:32:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.251 20:32:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:27.251 20:32:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:27.251 20:32:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.251 20:32:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:27.251 20:32:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.251 20:32:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:27.251 20:32:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:27.251 20:32:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.251 20:32:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:27.251 20:32:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.251 20:32:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:27.251 20:32:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.251 20:32:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:27.251 20:32:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.251 20:32:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:27.251 20:32:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:27.251 EAL: No free 2048 kB hugepages reported on node 1 
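Collapsed out of the rpc_cmd trace above, the target provisioning for this run is five RPCs followed by the perf invocation. A sketch under the assumption that rpc.py and spdk_nvme_perf are on PATH (full paths appear in the trace):

  rpc.py nvmf_create_transport -t tcp -o -u 8192        # TCP transport; -o/-u exactly as traced
  rpc.py bdev_malloc_create 64 512                      # 64 MiB malloc bdev, 512 B blocks -> Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'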
00:06:37.223 Initializing NVMe Controllers
00:06:37.223 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:37.223 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:06:37.223 Initialization complete. Launching workers.
00:06:37.223 ========================================================
00:06:37.223                                                                 Latency(us)
00:06:37.223 Device Information                                            :     IOPS    MiB/s   Average      min      max
00:06:37.223 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17873.45    69.82   3581.07   551.50 16351.75
00:06:37.223 ========================================================
00:06:37.223 Total                                                         : 17873.45    69.82   3581.07   551.50 16351.75
00:06:37.223
00:06:37.223 20:32:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:06:37.223 20:32:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:06:37.223 20:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup
00:06:37.223 20:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync
00:06:37.223 20:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:06:37.223 20:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e
00:06:37.223 20:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20}
00:06:37.223 20:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:06:37.481 rmmod nvme_tcp
00:06:37.481 rmmod nvme_fabrics
00:06:37.481 rmmod nvme_keyring
00:06:37.481 20:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:06:37.481 20:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e
00:06:37.481 20:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0
00:06:37.481 20:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2529602 ']'
00:06:37.481 20:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2529602
00:06:37.481 20:32:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 2529602 ']'
00:06:37.481 20:32:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 2529602
00:06:37.481 20:32:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname
00:06:37.481 20:32:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:37.481 20:32:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2529602
00:06:37.481 20:32:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf
00:06:37.481 20:32:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']'
00:06:37.481 20:32:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2529602'
killing process with pid 2529602
00:06:37.481 20:32:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 2529602
00:06:37.481 20:32:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 2529602
00:06:37.481 nvmf threads initialize successfully
00:06:37.481 bdev subsystem init successfully
00:06:37.481 created an nvmf target service
00:06:37.481 create targets' poll groups done
00:06:37.481 all subsystems of target started
00:06:37.481 nvmf target is running
00:06:37.481 all subsystems of target stopped
00:06:37.481 destroy targets' poll groups done
00:06:37.481 destroyed the nvmf target service
00:06:37.481 bdev subsystem finish successfully
00:06:37.481 nvmf threads destroy successfully
00:06:37.481 20:32:11
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:37.481 20:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:37.481 20:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:37.739 20:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:37.739 20:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:37.739 20:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:37.739 20:32:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:37.739 20:32:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:39.643 20:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:39.643 20:32:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:39.643 20:32:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:39.643 20:32:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:39.643 00:06:39.643 real 0m18.693s 00:06:39.643 user 0m45.583s 00:06:39.643 sys 0m5.170s 00:06:39.643 20:32:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.643 20:32:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:39.643 ************************************ 00:06:39.643 END TEST nvmf_example 00:06:39.643 ************************************ 00:06:39.643 20:32:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:39.643 20:32:14 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:39.643 20:32:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:39.643 20:32:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.643 20:32:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:39.643 ************************************ 00:06:39.643 START TEST nvmf_filesystem 00:06:39.643 ************************************ 00:06:39.643 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:39.906 * Looking for test storage... 
00:06:39.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:39.906 20:32:14 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:39.906 20:32:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:39.906 #define SPDK_CONFIG_H 00:06:39.906 #define SPDK_CONFIG_APPS 1 00:06:39.906 #define SPDK_CONFIG_ARCH native 00:06:39.906 #undef SPDK_CONFIG_ASAN 00:06:39.906 #undef SPDK_CONFIG_AVAHI 00:06:39.906 #undef SPDK_CONFIG_CET 00:06:39.906 #define SPDK_CONFIG_COVERAGE 1 00:06:39.906 #define SPDK_CONFIG_CROSS_PREFIX 00:06:39.906 #undef SPDK_CONFIG_CRYPTO 00:06:39.906 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:39.906 #undef SPDK_CONFIG_CUSTOMOCF 00:06:39.906 #undef SPDK_CONFIG_DAOS 00:06:39.906 #define SPDK_CONFIG_DAOS_DIR 00:06:39.907 #define SPDK_CONFIG_DEBUG 1 00:06:39.907 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:39.907 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:39.907 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:39.907 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:39.907 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:39.907 #undef SPDK_CONFIG_DPDK_UADK 00:06:39.907 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:39.907 #define SPDK_CONFIG_EXAMPLES 1 00:06:39.907 #undef SPDK_CONFIG_FC 00:06:39.907 #define SPDK_CONFIG_FC_PATH 00:06:39.907 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:39.907 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:39.907 #undef SPDK_CONFIG_FUSE 00:06:39.907 #undef SPDK_CONFIG_FUZZER 00:06:39.907 #define SPDK_CONFIG_FUZZER_LIB 00:06:39.907 #undef SPDK_CONFIG_GOLANG 00:06:39.907 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:39.907 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:39.907 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:39.907 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:39.907 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:39.907 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:39.907 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:39.907 #define SPDK_CONFIG_IDXD 1 00:06:39.907 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:39.907 #undef SPDK_CONFIG_IPSEC_MB 00:06:39.907 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:39.907 #define SPDK_CONFIG_ISAL 1 00:06:39.907 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:39.907 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:39.907 #define SPDK_CONFIG_LIBDIR 00:06:39.907 #undef SPDK_CONFIG_LTO 00:06:39.907 #define SPDK_CONFIG_MAX_LCORES 128 00:06:39.907 #define SPDK_CONFIG_NVME_CUSE 1 00:06:39.907 #undef SPDK_CONFIG_OCF 00:06:39.907 #define SPDK_CONFIG_OCF_PATH 00:06:39.907 #define 
SPDK_CONFIG_OPENSSL_PATH 00:06:39.907 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:39.907 #define SPDK_CONFIG_PGO_DIR 00:06:39.907 #undef SPDK_CONFIG_PGO_USE 00:06:39.907 #define SPDK_CONFIG_PREFIX /usr/local 00:06:39.907 #undef SPDK_CONFIG_RAID5F 00:06:39.907 #undef SPDK_CONFIG_RBD 00:06:39.907 #define SPDK_CONFIG_RDMA 1 00:06:39.907 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:39.907 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:39.907 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:39.907 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:39.907 #define SPDK_CONFIG_SHARED 1 00:06:39.907 #undef SPDK_CONFIG_SMA 00:06:39.907 #define SPDK_CONFIG_TESTS 1 00:06:39.907 #undef SPDK_CONFIG_TSAN 00:06:39.907 #define SPDK_CONFIG_UBLK 1 00:06:39.907 #define SPDK_CONFIG_UBSAN 1 00:06:39.907 #undef SPDK_CONFIG_UNIT_TESTS 00:06:39.907 #undef SPDK_CONFIG_URING 00:06:39.907 #define SPDK_CONFIG_URING_PATH 00:06:39.907 #undef SPDK_CONFIG_URING_ZNS 00:06:39.907 #undef SPDK_CONFIG_USDT 00:06:39.907 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:39.907 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:39.907 #define SPDK_CONFIG_VFIO_USER 1 00:06:39.907 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:39.907 #define SPDK_CONFIG_VHOST 1 00:06:39.907 #define SPDK_CONFIG_VIRTIO 1 00:06:39.907 #undef SPDK_CONFIG_VTUNE 00:06:39.907 #define SPDK_CONFIG_VTUNE_DIR 00:06:39.907 #define SPDK_CONFIG_WERROR 1 00:06:39.907 #define SPDK_CONFIG_WPDK_DIR 00:06:39.907 #undef SPDK_CONFIG_XNVME 00:06:39.907 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:06:39.907 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:39.908 20:32:14 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:39.908 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
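[Annotation] The long run of ": 0" / "export SPDK_TEST_..." pairs traced above comes from bash's default-assignment idiom in autotest_common.sh: each flag keeps whatever value the CI job already passed in, falls back to a default otherwise, and is then exported so every child test script sees the same configuration. Xtrace prints the already-expanded command, which is why only ": 0", ": tcp", or ": e810" is visible. A minimal sketch of the assumed idiom, using flag names and values taken from this run's trace:

    : "${SPDK_TEST_NVMF:=0}"              # keep the caller's value, else default; traced as ': 1' in this run
    export SPDK_TEST_NVMF                 # child test scripts inherit the flag
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"  # traced as ': tcp' above
    export SPDK_TEST_NVMF_TRANSPORT
    : "${SPDK_TEST_NVMF_NICS:=e810}"      # traced as ': e810' -> use the Intel E810 ports
    export SPDK_TEST_NVMF_NICS

The ASAN_OPTIONS/UBSAN_OPTIONS/LSAN_OPTIONS exports that follow the flags configure the sanitizers the same way; the "leak:libfuse3.so" line is the contents echoed into the LSAN suppression file, not an error.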
00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j96 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 2531876 ]] 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 2531876 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.72Lm3S 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.72Lm3S/tests/target /tmp/spdk.72Lm3S 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=950202368 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4334227456 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=189592154112 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=195974299648 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6382145536 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97983774720 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987149824 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=39185485824 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=39194861568 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9375744 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97986338816 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987149824 00:06:39.909 20:32:14 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=811008 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=19597422592 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=19597426688 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:39.909 * Looking for test storage... 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=189592154112 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8596738048 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:39.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:06:39.909 20:32:14 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:39.909 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
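[Annotation] Before nvmf/common.sh is sourced above, set_test_storage picked where the filesystem test will put its data; the arithmetic is easier to follow with this run's numbers pulled out of the df -T output that was read into the mounts/fss/avails/sizes/uses arrays. A hedged reconstruction (the exact helper lives in autotest_common.sh; variable names follow the trace):

    requested_size=$(( 2147483648 + 67108864 ))   # 2 GiB of test data + 64 MiB margin = 2214592512
    target_space=189592154112                     # avails[/]: free space on the spdk_root overlay
    size=195974299648                             # sizes[/]: total size of that filesystem
    (( target_space >= requested_size ))          # true -> the testdir under / is usable as-is
    new_size=$(( requested_size + (size - target_space) ))   # 8596738048, matching the trace
    (( new_size * 100 / size > 95 )) || :         # ~4% projected use, far below the 95% cutoff

Both checks pass, so SPDK_TEST_STORAGE is exported as /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target and the mktemp fallback /tmp/spdk.72Lm3S is left unused.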
00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:39.910 20:32:14 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:39.910 20:32:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:40.169 20:32:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:40.169 20:32:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:40.169 20:32:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:40.169 20:32:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:40.169 20:32:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:40.169 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:40.169 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:40.169 20:32:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:40.169 20:32:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:40.169 20:32:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:40.169 20:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:45.444 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:45.444 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:45.444 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:45.444 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:45.444 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:45.444 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:45.444 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:45.444 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:45.444 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:45.444 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:06:45.444 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:45.444 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:06:45.444 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:45.444 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:06:45.444 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:06:45.444 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:45.444 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:45.444 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:45.444 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:45.444 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:45.444 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:45.444 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:45.444 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:45.444 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:45.444 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:45.444 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:45.444 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:45.444 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:45.444 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:45.444 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:45.444 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:45.444 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:45.444 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:45.444 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:45.444 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:45.444 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:45.444 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:45.444 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:45.445 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:45.445 20:32:19 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:45.445 Found net devices under 0000:86:00.0: cvl_0_0 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:45.445 Found net devices under 0000:86:00.1: cvl_0_1 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:45.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:45.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:06:45.445 00:06:45.445 --- 10.0.0.2 ping statistics --- 00:06:45.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.445 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:45.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:45.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:06:45.445 00:06:45.445 --- 10.0.0.1 ping statistics --- 00:06:45.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.445 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:45.445 ************************************ 00:06:45.445 START TEST nvmf_filesystem_no_in_capsule 00:06:45.445 ************************************ 00:06:45.445 20:32:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:06:45.446 20:32:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:06:45.446 20:32:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:45.446 20:32:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:45.446 20:32:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:06:45.446 20:32:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:45.446 20:32:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2534984 00:06:45.446 20:32:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2534984 00:06:45.446 20:32:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:45.446 20:32:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2534984 ']' 00:06:45.446 20:32:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.446 20:32:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.446 20:32:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.446 20:32:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.446 20:32:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:45.446 [2024-07-15 20:32:19.890445] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:06:45.446 [2024-07-15 20:32:19.890486] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:45.446 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.705 [2024-07-15 20:32:19.947584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:45.705 [2024-07-15 20:32:20.026158] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:45.705 [2024-07-15 20:32:20.026195] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:45.705 [2024-07-15 20:32:20.026203] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:45.705 [2024-07-15 20:32:20.026210] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:45.705 [2024-07-15 20:32:20.026216] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
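The nvmf_tcp_init block above wires the two ice ports back to back: cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) as the target side with 10.0.0.2/24, cvl_0_1 stays in the default namespace as the initiator side with 10.0.0.1/24, and nvmf_tgt is then launched inside that namespace. Condensed from the trace, keeping only the commands that actually ran:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                             # initiator side -> target side
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target side -> initiator side
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

Both pings must come back clean before nvmf/common.sh returns 0 and the filesystem tests begin.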
00:06:45.705 [2024-07-15 20:32:20.026259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.705 [2024-07-15 20:32:20.026356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.705 [2024-07-15 20:32:20.026442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:45.705 [2024-07-15 20:32:20.026444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.273 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.273 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:46.274 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:46.274 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:46.274 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:46.274 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:46.274 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:46.274 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:46.274 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.274 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:46.274 [2024-07-15 20:32:20.738286] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:46.274 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.274 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:46.274 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.274 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:46.532 Malloc1 00:06:46.532 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.532 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:46.532 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.532 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:46.532 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.532 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:46.532 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.532 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:06:46.532 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.532 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:46.532 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.532 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:46.532 [2024-07-15 20:32:20.884153] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:46.532 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.532 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:46.532 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:46.532 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:46.532 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:46.532 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:46.532 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:46.532 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.532 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:46.532 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.532 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:46.532 { 00:06:46.532 "name": "Malloc1", 00:06:46.532 "aliases": [ 00:06:46.532 "277ac8af-dedd-4d24-93ae-268210933772" 00:06:46.532 ], 00:06:46.532 "product_name": "Malloc disk", 00:06:46.532 "block_size": 512, 00:06:46.532 "num_blocks": 1048576, 00:06:46.532 "uuid": "277ac8af-dedd-4d24-93ae-268210933772", 00:06:46.532 "assigned_rate_limits": { 00:06:46.532 "rw_ios_per_sec": 0, 00:06:46.532 "rw_mbytes_per_sec": 0, 00:06:46.532 "r_mbytes_per_sec": 0, 00:06:46.532 "w_mbytes_per_sec": 0 00:06:46.532 }, 00:06:46.532 "claimed": true, 00:06:46.532 "claim_type": "exclusive_write", 00:06:46.532 "zoned": false, 00:06:46.532 "supported_io_types": { 00:06:46.532 "read": true, 00:06:46.532 "write": true, 00:06:46.532 "unmap": true, 00:06:46.532 "flush": true, 00:06:46.532 "reset": true, 00:06:46.532 "nvme_admin": false, 00:06:46.532 "nvme_io": false, 00:06:46.532 "nvme_io_md": false, 00:06:46.532 "write_zeroes": true, 00:06:46.532 "zcopy": true, 00:06:46.532 "get_zone_info": false, 00:06:46.533 "zone_management": false, 00:06:46.533 "zone_append": false, 00:06:46.533 "compare": false, 00:06:46.533 "compare_and_write": false, 00:06:46.533 "abort": true, 00:06:46.533 "seek_hole": false, 00:06:46.533 "seek_data": false, 00:06:46.533 "copy": true, 00:06:46.533 "nvme_iov_md": false 00:06:46.533 }, 00:06:46.533 "memory_domains": [ 00:06:46.533 { 
00:06:46.533 "dma_device_id": "system", 00:06:46.533 "dma_device_type": 1 00:06:46.533 }, 00:06:46.533 { 00:06:46.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.533 "dma_device_type": 2 00:06:46.533 } 00:06:46.533 ], 00:06:46.533 "driver_specific": {} 00:06:46.533 } 00:06:46.533 ]' 00:06:46.533 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:46.533 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:46.533 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:46.533 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:46.533 20:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:46.533 20:32:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:46.533 20:32:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:46.533 20:32:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:47.912 20:32:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:47.912 20:32:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:47.912 20:32:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:47.912 20:32:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:47.912 20:32:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:49.817 20:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:49.817 20:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:49.817 20:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:49.817 20:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:49.817 20:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:49.817 20:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:49.817 20:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:49.817 20:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:49.817 20:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:49.817 20:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:06:49.817 20:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:49.817 20:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:49.817 20:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:49.817 20:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:49.817 20:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:49.817 20:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:49.817 20:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:49.817 20:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:50.385 20:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:51.322 20:32:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:51.322 20:32:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:51.322 20:32:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:51.322 20:32:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.322 20:32:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:51.322 ************************************ 00:06:51.322 START TEST filesystem_ext4 00:06:51.322 ************************************ 00:06:51.322 20:32:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:51.322 20:32:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:51.322 20:32:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:51.322 20:32:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:51.322 20:32:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:51.322 20:32:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:51.322 20:32:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:51.322 20:32:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:51.322 20:32:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:51.322 20:32:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:51.322 20:32:25 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:51.322 mke2fs 1.46.5 (30-Dec-2021) 00:06:51.322 Discarding device blocks: 0/522240 done 00:06:51.322 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:51.322 Filesystem UUID: 038ab952-70eb-4032-afb2-174d2d6865f0 00:06:51.322 Superblock backups stored on blocks: 00:06:51.322 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:51.322 00:06:51.322 Allocating group tables: 0/64 done 00:06:51.322 Writing inode tables: 0/64 done 00:06:52.258 Creating journal (8192 blocks): done 00:06:52.258 Writing superblocks and filesystem accounting information: 0/64 done 00:06:52.258 00:06:52.258 20:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:52.258 20:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:52.554 20:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:52.554 20:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:06:52.554 20:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:52.554 20:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:06:52.554 20:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:52.554 20:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:52.554 20:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2534984 00:06:52.554 20:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:52.554 20:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:52.554 20:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:52.554 20:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:52.554 00:06:52.554 real 0m1.231s 00:06:52.554 user 0m0.031s 00:06:52.554 sys 0m0.060s 00:06:52.554 20:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.554 20:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:52.554 ************************************ 00:06:52.554 END TEST filesystem_ext4 00:06:52.554 ************************************ 00:06:52.554 20:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:52.554 20:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:52.554 20:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:52.554 20:32:26 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.554 20:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:52.554 ************************************ 00:06:52.554 START TEST filesystem_btrfs 00:06:52.554 ************************************ 00:06:52.554 20:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:52.554 20:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:52.554 20:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:52.554 20:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:52.554 20:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:52.554 20:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:52.554 20:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:52.554 20:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:52.554 20:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:52.554 20:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:52.554 20:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:52.813 btrfs-progs v6.6.2 00:06:52.813 See https://btrfs.readthedocs.io for more information. 00:06:52.813 00:06:52.813 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:52.813 NOTE: several default settings have changed in version 5.15, please make sure 00:06:52.813 this does not affect your deployments: 00:06:52.813 - DUP for metadata (-m dup) 00:06:52.813 - enabled no-holes (-O no-holes) 00:06:52.813 - enabled free-space-tree (-R free-space-tree) 00:06:52.813 00:06:52.813 Label: (null) 00:06:52.813 UUID: 4db50cdf-b2d4-4a7a-9e73-70ebe70d3541 00:06:52.813 Node size: 16384 00:06:52.813 Sector size: 4096 00:06:52.813 Filesystem size: 510.00MiB 00:06:52.813 Block group profiles: 00:06:52.813 Data: single 8.00MiB 00:06:52.813 Metadata: DUP 32.00MiB 00:06:52.813 System: DUP 8.00MiB 00:06:52.813 SSD detected: yes 00:06:52.813 Zoned device: no 00:06:52.813 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:52.813 Runtime features: free-space-tree 00:06:52.813 Checksum: crc32c 00:06:52.813 Number of devices: 1 00:06:52.813 Devices: 00:06:52.813 ID SIZE PATH 00:06:52.813 1 510.00MiB /dev/nvme0n1p1 00:06:52.813 00:06:52.813 20:32:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:52.813 20:32:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:53.749 20:32:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:53.749 20:32:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:53.749 20:32:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:53.749 20:32:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:06:53.749 20:32:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:53.749 20:32:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:53.749 20:32:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2534984 00:06:53.749 20:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:53.749 20:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:53.749 20:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:53.749 20:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:53.749 00:06:53.749 real 0m1.103s 00:06:53.749 user 0m0.026s 00:06:53.749 sys 0m0.124s 00:06:53.749 20:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.749 20:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:53.749 ************************************ 00:06:53.749 END TEST filesystem_btrfs 00:06:53.749 ************************************ 00:06:53.750 20:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:53.750 20:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:53.750 20:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:53.750 20:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.750 20:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:53.750 ************************************ 00:06:53.750 START TEST filesystem_xfs 00:06:53.750 ************************************ 00:06:53.750 20:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:53.750 20:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:53.750 20:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:53.750 20:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:53.750 20:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:53.750 20:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:53.750 20:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:53.750 20:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:06:53.750 20:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:53.750 20:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:53.750 20:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:53.750 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:53.750 = sectsz=512 attr=2, projid32bit=1 00:06:53.750 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:53.750 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:53.750 data = bsize=4096 blocks=130560, imaxpct=25 00:06:53.750 = sunit=0 swidth=0 blks 00:06:53.750 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:53.750 log =internal log bsize=4096 blocks=16384, version=2 00:06:53.750 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:53.750 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:54.686 Discarding blocks...Done. 
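Each filesystem_* subtest drives the same create/verify loop from target/filesystem.sh, as the mount/touch/sync/rm/umount sequence below repeats for xfs: build the filesystem on the exported partition (the '[' ext4 = ext4 ']' branches above show make_filesystem picking -F for ext4 and -f for btrfs/xfs), do a minimal write and erase, unmount, and confirm the target process and the block devices are still healthy. A condensed sketch, using the names from the trace:

    mkfs."$fstype" "$force" /dev/nvme0n1p1     # force is -F for ext4, -f for btrfs and xfs
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                         # nvmf_tgt (pid 2534984 here) must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1      # whole namespace still visible
    lsblk -l -o NAME | grep -q -w nvme0n1p1    # partition still visible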
00:06:54.686 20:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:54.686 20:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:57.222 20:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:57.222 20:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:06:57.222 20:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:57.222 20:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:06:57.222 20:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:06:57.222 20:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:57.222 20:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2534984 00:06:57.222 20:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:57.222 20:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:57.222 20:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:57.222 20:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:57.222 00:06:57.222 real 0m3.224s 00:06:57.222 user 0m0.024s 00:06:57.222 sys 0m0.070s 00:06:57.222 20:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.222 20:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:57.222 ************************************ 00:06:57.222 END TEST filesystem_xfs 00:06:57.222 ************************************ 00:06:57.222 20:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:57.222 20:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:57.222 20:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:57.222 20:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:57.479 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:57.479 20:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:57.479 20:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:57.479 20:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:57.479 20:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:57.479 20:32:31 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:57.479 20:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:57.479 20:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:57.480 20:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:57.480 20:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.480 20:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:57.480 20:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.480 20:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:57.480 20:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2534984 00:06:57.480 20:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2534984 ']' 00:06:57.480 20:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2534984 00:06:57.480 20:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:57.480 20:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:57.480 20:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2534984 00:06:57.480 20:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:57.480 20:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:57.480 20:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2534984' 00:06:57.480 killing process with pid 2534984 00:06:57.480 20:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 2534984 00:06:57.480 20:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 2534984 00:06:57.738 20:32:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:57.738 00:06:57.738 real 0m12.374s 00:06:57.738 user 0m48.661s 00:06:57.738 sys 0m1.158s 00:06:57.738 20:32:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.738 20:32:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:57.738 ************************************ 00:06:57.738 END TEST nvmf_filesystem_no_in_capsule 00:06:57.738 ************************************ 00:06:57.998 20:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:57.998 20:32:32 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:57.998 20:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:06:57.998 20:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.998 20:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:57.998 ************************************ 00:06:57.998 START TEST nvmf_filesystem_in_capsule 00:06:57.998 ************************************ 00:06:57.998 20:32:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:06:57.998 20:32:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:57.998 20:32:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:57.998 20:32:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:57.998 20:32:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:57.998 20:32:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:57.998 20:32:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2537227 00:06:57.998 20:32:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2537227 00:06:57.998 20:32:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:57.998 20:32:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2537227 ']' 00:06:57.998 20:32:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.998 20:32:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.998 20:32:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.998 20:32:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.998 20:32:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:57.998 [2024-07-15 20:32:32.336463] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:06:57.998 [2024-07-15 20:32:32.336505] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:57.998 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.998 [2024-07-15 20:32:32.394131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:57.998 [2024-07-15 20:32:32.468979] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:57.998 [2024-07-15 20:32:32.469020] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
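The no_in_capsule group closed just above with the standard teardown: the test partition is dropped under flock, the initiator disconnects, the subsystem is deleted over RPC, and the first nvmf_tgt (pid 2534984) is killed before nvmf_filesystem_in_capsule starts a fresh one. Roughly, with rpc.py standing in for the harness's rpc_cmd wrapper:

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill 2534984    # killprocess: verify the pid is ours (reactor_0), kill it, then wait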
00:06:57.998 [2024-07-15 20:32:32.469028] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:57.998 [2024-07-15 20:32:32.469034] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:57.998 [2024-07-15 20:32:32.469040] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:57.998 [2024-07-15 20:32:32.469097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.998 [2024-07-15 20:32:32.469116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.998 [2024-07-15 20:32:32.469206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:57.998 [2024-07-15 20:32:32.469207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.933 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:58.933 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:58.933 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:58.933 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:58.933 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:58.933 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:58.933 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:58.933 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:58.933 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.933 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:58.933 [2024-07-15 20:32:33.185230] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:58.933 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.933 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:58.933 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.933 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:58.933 Malloc1 00:06:58.933 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.933 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:58.933 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.933 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:58.933 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.933 20:32:33 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:58.933 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.933 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:58.933 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.933 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:58.933 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.933 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:58.933 [2024-07-15 20:32:33.335007] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:58.933 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.933 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:58.933 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:58.933 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:58.933 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:58.933 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:58.933 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:58.933 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.933 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:58.933 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.933 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:58.933 { 00:06:58.933 "name": "Malloc1", 00:06:58.933 "aliases": [ 00:06:58.933 "79c9597f-2bcf-406f-8f45-75944301ccc5" 00:06:58.933 ], 00:06:58.933 "product_name": "Malloc disk", 00:06:58.933 "block_size": 512, 00:06:58.933 "num_blocks": 1048576, 00:06:58.933 "uuid": "79c9597f-2bcf-406f-8f45-75944301ccc5", 00:06:58.933 "assigned_rate_limits": { 00:06:58.933 "rw_ios_per_sec": 0, 00:06:58.933 "rw_mbytes_per_sec": 0, 00:06:58.933 "r_mbytes_per_sec": 0, 00:06:58.933 "w_mbytes_per_sec": 0 00:06:58.933 }, 00:06:58.933 "claimed": true, 00:06:58.933 "claim_type": "exclusive_write", 00:06:58.933 "zoned": false, 00:06:58.933 "supported_io_types": { 00:06:58.933 "read": true, 00:06:58.933 "write": true, 00:06:58.933 "unmap": true, 00:06:58.933 "flush": true, 00:06:58.933 "reset": true, 00:06:58.933 "nvme_admin": false, 00:06:58.933 "nvme_io": false, 00:06:58.933 "nvme_io_md": false, 00:06:58.933 "write_zeroes": true, 00:06:58.933 "zcopy": true, 00:06:58.933 "get_zone_info": false, 00:06:58.933 "zone_management": false, 00:06:58.933 
"zone_append": false, 00:06:58.933 "compare": false, 00:06:58.933 "compare_and_write": false, 00:06:58.933 "abort": true, 00:06:58.933 "seek_hole": false, 00:06:58.933 "seek_data": false, 00:06:58.933 "copy": true, 00:06:58.933 "nvme_iov_md": false 00:06:58.933 }, 00:06:58.933 "memory_domains": [ 00:06:58.933 { 00:06:58.933 "dma_device_id": "system", 00:06:58.933 "dma_device_type": 1 00:06:58.933 }, 00:06:58.933 { 00:06:58.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:58.933 "dma_device_type": 2 00:06:58.933 } 00:06:58.933 ], 00:06:58.933 "driver_specific": {} 00:06:58.933 } 00:06:58.933 ]' 00:06:58.933 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:58.933 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:58.933 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:59.191 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:59.191 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:59.191 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:59.191 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:59.191 20:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:00.566 20:32:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:00.566 20:32:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:00.566 20:32:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:00.566 20:32:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:00.566 20:32:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:02.466 20:32:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:02.466 20:32:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:02.466 20:32:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:02.466 20:32:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:02.466 20:32:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:02.466 20:32:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:02.466 20:32:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:02.466 20:32:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:07:02.466 20:32:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:02.466 20:32:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:02.466 20:32:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:02.466 20:32:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:02.466 20:32:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:02.466 20:32:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:02.466 20:32:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:02.466 20:32:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:02.466 20:32:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:02.725 20:32:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:02.983 20:32:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:03.920 20:32:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:03.920 20:32:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:03.920 20:32:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:03.920 20:32:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.920 20:32:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:03.920 ************************************ 00:07:03.920 START TEST filesystem_in_capsule_ext4 00:07:03.920 ************************************ 00:07:03.920 20:32:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:03.920 20:32:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:03.920 20:32:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:03.920 20:32:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:03.920 20:32:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:03.920 20:32:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:03.920 20:32:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:03.920 20:32:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:03.920 20:32:38 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:03.920 20:32:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:03.920 20:32:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:03.920 mke2fs 1.46.5 (30-Dec-2021) 00:07:03.920 Discarding device blocks: 0/522240 done 00:07:04.179 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:04.179 Filesystem UUID: 42e240c1-e8b8-4f9c-913b-6f320f43934e 00:07:04.179 Superblock backups stored on blocks: 00:07:04.179 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:04.179 00:07:04.179 Allocating group tables: 0/64 done 00:07:04.179 Writing inode tables: 0/64 done 00:07:07.469 Creating journal (8192 blocks): done 00:07:07.469 Writing superblocks and filesystem accounting information: 0/64 done 00:07:07.469 00:07:07.469 20:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:07.469 20:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:07.469 20:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:07.469 20:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:07.469 20:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:07.469 20:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:07.469 20:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:07.469 20:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:07.469 20:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2537227 00:07:07.469 20:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:07.469 20:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:07.469 20:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:07.469 20:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:07.469 00:07:07.469 real 0m3.301s 00:07:07.469 user 0m0.023s 00:07:07.469 sys 0m0.069s 00:07:07.469 20:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.469 20:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:07.469 ************************************ 00:07:07.469 END TEST filesystem_in_capsule_ext4 00:07:07.469 ************************************ 00:07:07.469 
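The ext4 case that just passed follows a fixed exercise cycle which the btrfs and xfs cases below repeat unchanged; a minimal sketch, with 2537227 being the nvmf_tgt PID probed by kill -0 in this run:

  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device

  # Target must have survived the I/O...
  kill -0 2537227

  # ...and both the namespace and its partition must still enumerate.
  lsblk -l -o NAME | grep -q -w nvme0n1
  lsblk -l -o NAME | grep -q -w nvme0n1p1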
20:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:07.469 20:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:07.469 20:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:07.470 20:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.470 20:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:07.470 ************************************ 00:07:07.470 START TEST filesystem_in_capsule_btrfs 00:07:07.470 ************************************ 00:07:07.470 20:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:07.470 20:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:07.470 20:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:07.470 20:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:07.470 20:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:07.470 20:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:07.470 20:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:07.470 20:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:07.470 20:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:07.470 20:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:07.470 20:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:07.729 btrfs-progs v6.6.2 00:07:07.729 See https://btrfs.readthedocs.io for more information. 00:07:07.729 00:07:07.729 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:07.729 NOTE: several default settings have changed in version 5.15, please make sure 00:07:07.729 this does not affect your deployments: 00:07:07.729 - DUP for metadata (-m dup) 00:07:07.729 - enabled no-holes (-O no-holes) 00:07:07.729 - enabled free-space-tree (-R free-space-tree) 00:07:07.729 00:07:07.729 Label: (null) 00:07:07.729 UUID: 340ac549-1100-4683-9c42-175232de4609 00:07:07.729 Node size: 16384 00:07:07.729 Sector size: 4096 00:07:07.729 Filesystem size: 510.00MiB 00:07:07.729 Block group profiles: 00:07:07.729 Data: single 8.00MiB 00:07:07.729 Metadata: DUP 32.00MiB 00:07:07.729 System: DUP 8.00MiB 00:07:07.729 SSD detected: yes 00:07:07.729 Zoned device: no 00:07:07.729 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:07.729 Runtime features: free-space-tree 00:07:07.729 Checksum: crc32c 00:07:07.729 Number of devices: 1 00:07:07.729 Devices: 00:07:07.729 ID SIZE PATH 00:07:07.729 1 510.00MiB /dev/nvme0n1p1 00:07:07.729 00:07:07.729 20:32:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:07.729 20:32:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:07.989 20:32:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:07.989 20:32:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:07.989 20:32:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:07.989 20:32:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:07.989 20:32:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:07.989 20:32:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:07.989 20:32:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2537227 00:07:07.989 20:32:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:07.989 20:32:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:07.989 20:32:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:07.989 20:32:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:07.989 00:07:07.989 real 0m0.644s 00:07:07.989 user 0m0.033s 00:07:07.989 sys 0m0.118s 00:07:07.989 20:32:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.989 20:32:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:07.989 ************************************ 00:07:07.989 END TEST filesystem_in_capsule_btrfs 00:07:07.989 ************************************ 00:07:07.989 20:32:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:07:07.989 20:32:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:07.989 20:32:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:07.989 20:32:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.989 20:32:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:07.989 ************************************ 00:07:07.989 START TEST filesystem_in_capsule_xfs 00:07:07.989 ************************************ 00:07:07.989 20:32:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:07.989 20:32:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:07.989 20:32:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:07.989 20:32:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:07.989 20:32:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:07.989 20:32:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:07.989 20:32:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:07.990 20:32:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:07:07.990 20:32:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:07.990 20:32:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:07.990 20:32:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:07.990 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:07.990 = sectsz=512 attr=2, projid32bit=1 00:07:07.990 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:07.990 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:07.990 data = bsize=4096 blocks=130560, imaxpct=25 00:07:07.990 = sunit=0 swidth=0 blks 00:07:07.990 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:07.990 log =internal log bsize=4096 blocks=16384, version=2 00:07:07.990 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:07.990 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:09.374 Discarding blocks...Done. 
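Across the three TEST blocks only the mkfs invocation changes; the xtrace shows make_filesystem selecting -F for ext4 and -f for everything else. A sketch of that dispatch, reconstructed from the traced branches, so treat it as illustrative rather than the helper's full text:

  make_filesystem() {
      local fstype=$1 dev_name=$2 force
      # mkfs.ext4 spells "force" as -F; btrfs and xfs use -f.
      if [ "$fstype" = ext4 ]; then
          force=-F
      else
          force=-f
      fi
      mkfs.$fstype $force "$dev_name"
  }

  make_filesystem xfs /dev/nvme0n1p1   # the case traced above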
00:07:09.374 20:32:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:09.374 20:32:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:11.279 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:11.279 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:11.279 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:11.279 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:11.279 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:11.279 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:11.279 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2537227 00:07:11.279 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:11.279 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:11.279 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:11.279 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:11.279 00:07:11.279 real 0m2.989s 00:07:11.279 user 0m0.028s 00:07:11.279 sys 0m0.067s 00:07:11.279 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.279 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:11.279 ************************************ 00:07:11.279 END TEST filesystem_in_capsule_xfs 00:07:11.279 ************************************ 00:07:11.279 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:11.279 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:11.279 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:11.279 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:11.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:11.279 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:11.279 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:11.279 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:11.279 20:32:45 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:11.279 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:11.279 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:11.279 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:11.279 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:11.279 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.279 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:11.279 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.279 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:11.279 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2537227 00:07:11.279 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2537227 ']' 00:07:11.279 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2537227 00:07:11.279 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:11.280 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:11.280 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2537227 00:07:11.280 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:11.280 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:11.280 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2537227' 00:07:11.280 killing process with pid 2537227 00:07:11.280 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 2537227 00:07:11.280 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 2537227 00:07:11.579 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:11.579 00:07:11.579 real 0m13.671s 00:07:11.579 user 0m53.757s 00:07:11.579 sys 0m1.245s 00:07:11.579 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.579 20:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:11.579 ************************************ 00:07:11.579 END TEST nvmf_filesystem_in_capsule 00:07:11.579 ************************************ 00:07:11.579 20:32:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:11.579 20:32:45 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:11.579 20:32:45 nvmf_tcp.nvmf_filesystem -- 
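The teardown just logged, plus the module unload that follows, condenses to the sketch below; rpc.py stands in for the rpc_cmd wrapper, which drives the same RPC against /var/tmp/spdk.sock:

  # Remove the test partition, serialized against udev via flock.
  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
  sync

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

  # killprocess: stop the nvmf_tgt reactor by PID.
  kill 2537227

  # nvmftestfini: unload the kernel initiator stack and drop the
  # address left on the initiator-side interface.
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  ip -4 addr flush cvl_0_1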
nvmf/common.sh@488 -- # nvmfcleanup 00:07:11.579 20:32:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:11.579 20:32:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:11.579 20:32:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:11.579 20:32:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:11.579 20:32:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:11.579 rmmod nvme_tcp 00:07:11.579 rmmod nvme_fabrics 00:07:11.579 rmmod nvme_keyring 00:07:11.579 20:32:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:11.579 20:32:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:11.579 20:32:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:11.579 20:32:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:11.579 20:32:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:11.579 20:32:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:11.579 20:32:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:11.579 20:32:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:11.579 20:32:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:11.579 20:32:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.579 20:32:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:11.579 20:32:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:14.117 20:32:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:14.117 00:07:14.117 real 0m33.998s 00:07:14.117 user 1m44.107s 00:07:14.117 sys 0m6.671s 00:07:14.117 20:32:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.117 20:32:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:14.117 ************************************ 00:07:14.117 END TEST nvmf_filesystem 00:07:14.117 ************************************ 00:07:14.117 20:32:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:14.117 20:32:48 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:14.117 20:32:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:14.117 20:32:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.117 20:32:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:14.117 ************************************ 00:07:14.117 START TEST nvmf_target_discovery 00:07:14.117 ************************************ 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:14.117 * Looking for test storage... 
00:07:14.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:14.117 20:32:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:19.395 20:32:53 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:19.395 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:19.395 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:19.395 Found net devices under 0000:86:00.0: cvl_0_0 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:19.395 Found net devices under 0000:86:00.1: cvl_0_1 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:19.395 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:19.396 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:19.396 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:19.396 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:19.396 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:19.396 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:19.396 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:19.396 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:19.396 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:19.396 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
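The NIC plumbing traced here boils down to: locate the two e810 ports by PCI ID, read their net device names from sysfs, then split target and initiator across a network namespace. A condensed sketch using the names and addresses from this run:

  # PCI function -> kernel net device, exactly as common.sh derives it.
  ls /sys/bus/pci/devices/0000:86:00.0/net/   # -> cvl_0_0 (target side)
  ls /sys/bus/pci/devices/0000:86:00.1/net/   # -> cvl_0_1 (initiator side)

  # Give the target its own network stack.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Let NVMe/TCP traffic in, then verify reachability both ways.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1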
ip link set cvl_0_1 up 00:07:19.396 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:19.396 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:19.396 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:19.396 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:19.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:19.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:07:19.396 00:07:19.396 --- 10.0.0.2 ping statistics --- 00:07:19.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.396 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:07:19.396 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:19.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:19.396 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:07:19.396 00:07:19.396 --- 10.0.0.1 ping statistics --- 00:07:19.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.396 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:07:19.396 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:19.396 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:19.396 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:19.396 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:19.396 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:19.396 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:19.396 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:19.396 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:19.396 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:19.396 20:32:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:19.396 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:19.396 20:32:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:19.396 20:32:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:19.396 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2543157 00:07:19.396 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:19.396 20:32:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2543157 00:07:19.396 20:32:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 2543157 ']' 00:07:19.396 20:32:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.396 20:32:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:19.396 20:32:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:07:19.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.396 20:32:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:19.396 20:32:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:19.396 [2024-07-15 20:32:53.738938] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:07:19.396 [2024-07-15 20:32:53.738981] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:19.396 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.396 [2024-07-15 20:32:53.795351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:19.396 [2024-07-15 20:32:53.876386] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:19.396 [2024-07-15 20:32:53.876422] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:19.396 [2024-07-15 20:32:53.876429] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:19.396 [2024-07-15 20:32:53.876436] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:19.396 [2024-07-15 20:32:53.876441] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:19.396 [2024-07-15 20:32:53.876489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.396 [2024-07-15 20:32:53.876586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.396 [2024-07-15 20:32:53.876649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:19.396 [2024-07-15 20:32:53.876651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.334 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:20.334 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:07:20.334 20:32:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:20.334 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:20.334 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:20.335 [2024-07-15 20:32:54.595233] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
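For completeness, the target launch and readiness wait recorded above look roughly like this; the rpc_get_methods probe is an assumption about how waitforlisten detects the socket, not a quote from the script:

  # Run nvmf_tgt inside the target namespace: shm id 0, full trace
  # mask, four cores, matching the nvmfappstart arguments logged.
  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # waitforlisten: block until the RPC socket answers.
  until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done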
00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:20.335 Null1 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:20.335 [2024-07-15 20:32:54.640683] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:20.335 Null2 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:20.335 20:32:54 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:20.335 Null3 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:20.335 Null4 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.335 20:32:54 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.335 20:32:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:07:20.595 00:07:20.595 Discovery Log Number of Records 6, Generation counter 6 00:07:20.595 =====Discovery Log Entry 0====== 00:07:20.595 trtype: tcp 00:07:20.595 adrfam: ipv4 00:07:20.595 subtype: current discovery subsystem 00:07:20.595 treq: not required 00:07:20.595 portid: 0 00:07:20.595 trsvcid: 4420 00:07:20.595 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:20.595 traddr: 10.0.0.2 00:07:20.595 eflags: explicit discovery connections, duplicate discovery information 00:07:20.595 sectype: none 00:07:20.595 =====Discovery Log Entry 1====== 00:07:20.595 trtype: tcp 00:07:20.595 adrfam: ipv4 00:07:20.595 subtype: nvme subsystem 00:07:20.595 treq: not required 00:07:20.595 portid: 0 00:07:20.595 trsvcid: 4420 00:07:20.595 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:20.595 traddr: 10.0.0.2 00:07:20.595 eflags: none 00:07:20.595 sectype: none 00:07:20.595 =====Discovery Log Entry 2====== 00:07:20.595 trtype: tcp 00:07:20.595 adrfam: ipv4 00:07:20.595 subtype: nvme subsystem 00:07:20.595 treq: not required 00:07:20.595 portid: 0 00:07:20.595 trsvcid: 4420 00:07:20.595 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:20.595 traddr: 10.0.0.2 00:07:20.595 eflags: none 00:07:20.595 sectype: none 00:07:20.595 =====Discovery Log Entry 3====== 00:07:20.595 trtype: tcp 00:07:20.595 adrfam: ipv4 00:07:20.595 subtype: nvme subsystem 00:07:20.595 treq: not required 00:07:20.595 portid: 0 00:07:20.595 trsvcid: 4420 00:07:20.595 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:20.595 traddr: 10.0.0.2 00:07:20.595 eflags: none 00:07:20.595 sectype: none 00:07:20.595 =====Discovery Log Entry 4====== 00:07:20.595 trtype: tcp 00:07:20.595 adrfam: ipv4 00:07:20.595 subtype: nvme subsystem 00:07:20.595 treq: not required 
00:07:20.595 portid: 0 00:07:20.595 trsvcid: 4420 00:07:20.595 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:20.595 traddr: 10.0.0.2 00:07:20.595 eflags: none 00:07:20.595 sectype: none 00:07:20.595 =====Discovery Log Entry 5====== 00:07:20.595 trtype: tcp 00:07:20.595 adrfam: ipv4 00:07:20.595 subtype: discovery subsystem referral 00:07:20.595 treq: not required 00:07:20.595 portid: 0 00:07:20.595 trsvcid: 4430 00:07:20.595 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:20.595 traddr: 10.0.0.2 00:07:20.595 eflags: none 00:07:20.595 sectype: none 00:07:20.595 20:32:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:20.595 Perform nvmf subsystem discovery via RPC 00:07:20.595 20:32:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:20.595 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.595 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:20.595 [ 00:07:20.595 { 00:07:20.595 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:20.595 "subtype": "Discovery", 00:07:20.595 "listen_addresses": [ 00:07:20.595 { 00:07:20.595 "trtype": "TCP", 00:07:20.595 "adrfam": "IPv4", 00:07:20.595 "traddr": "10.0.0.2", 00:07:20.595 "trsvcid": "4420" 00:07:20.595 } 00:07:20.595 ], 00:07:20.595 "allow_any_host": true, 00:07:20.595 "hosts": [] 00:07:20.595 }, 00:07:20.595 { 00:07:20.595 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:20.595 "subtype": "NVMe", 00:07:20.595 "listen_addresses": [ 00:07:20.595 { 00:07:20.595 "trtype": "TCP", 00:07:20.595 "adrfam": "IPv4", 00:07:20.595 "traddr": "10.0.0.2", 00:07:20.595 "trsvcid": "4420" 00:07:20.595 } 00:07:20.595 ], 00:07:20.595 "allow_any_host": true, 00:07:20.595 "hosts": [], 00:07:20.595 "serial_number": "SPDK00000000000001", 00:07:20.595 "model_number": "SPDK bdev Controller", 00:07:20.595 "max_namespaces": 32, 00:07:20.595 "min_cntlid": 1, 00:07:20.595 "max_cntlid": 65519, 00:07:20.595 "namespaces": [ 00:07:20.595 { 00:07:20.595 "nsid": 1, 00:07:20.595 "bdev_name": "Null1", 00:07:20.595 "name": "Null1", 00:07:20.595 "nguid": "4DE29768FBF147AF9127EA7BB5695A61", 00:07:20.595 "uuid": "4de29768-fbf1-47af-9127-ea7bb5695a61" 00:07:20.595 } 00:07:20.595 ] 00:07:20.595 }, 00:07:20.595 { 00:07:20.595 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:20.595 "subtype": "NVMe", 00:07:20.595 "listen_addresses": [ 00:07:20.595 { 00:07:20.595 "trtype": "TCP", 00:07:20.595 "adrfam": "IPv4", 00:07:20.595 "traddr": "10.0.0.2", 00:07:20.595 "trsvcid": "4420" 00:07:20.595 } 00:07:20.595 ], 00:07:20.595 "allow_any_host": true, 00:07:20.595 "hosts": [], 00:07:20.595 "serial_number": "SPDK00000000000002", 00:07:20.595 "model_number": "SPDK bdev Controller", 00:07:20.595 "max_namespaces": 32, 00:07:20.595 "min_cntlid": 1, 00:07:20.595 "max_cntlid": 65519, 00:07:20.595 "namespaces": [ 00:07:20.595 { 00:07:20.595 "nsid": 1, 00:07:20.595 "bdev_name": "Null2", 00:07:20.595 "name": "Null2", 00:07:20.595 "nguid": "931309158AE4474285C3F81537E10772", 00:07:20.595 "uuid": "93130915-8ae4-4742-85c3-f81537e10772" 00:07:20.595 } 00:07:20.595 ] 00:07:20.595 }, 00:07:20.595 { 00:07:20.595 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:20.595 "subtype": "NVMe", 00:07:20.595 "listen_addresses": [ 00:07:20.595 { 00:07:20.595 "trtype": "TCP", 00:07:20.595 "adrfam": "IPv4", 00:07:20.595 "traddr": "10.0.0.2", 00:07:20.595 "trsvcid": "4420" 00:07:20.595 } 00:07:20.595 ], 00:07:20.595 "allow_any_host": true, 
00:07:20.595 "hosts": [], 00:07:20.595 "serial_number": "SPDK00000000000003", 00:07:20.595 "model_number": "SPDK bdev Controller", 00:07:20.595 "max_namespaces": 32, 00:07:20.595 "min_cntlid": 1, 00:07:20.595 "max_cntlid": 65519, 00:07:20.595 "namespaces": [ 00:07:20.595 { 00:07:20.596 "nsid": 1, 00:07:20.596 "bdev_name": "Null3", 00:07:20.596 "name": "Null3", 00:07:20.596 "nguid": "A6805D5E329E4E6DBD907BF1EF62D796", 00:07:20.596 "uuid": "a6805d5e-329e-4e6d-bd90-7bf1ef62d796" 00:07:20.596 } 00:07:20.596 ] 00:07:20.596 }, 00:07:20.596 { 00:07:20.596 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:20.596 "subtype": "NVMe", 00:07:20.596 "listen_addresses": [ 00:07:20.596 { 00:07:20.596 "trtype": "TCP", 00:07:20.596 "adrfam": "IPv4", 00:07:20.596 "traddr": "10.0.0.2", 00:07:20.596 "trsvcid": "4420" 00:07:20.596 } 00:07:20.596 ], 00:07:20.596 "allow_any_host": true, 00:07:20.596 "hosts": [], 00:07:20.596 "serial_number": "SPDK00000000000004", 00:07:20.596 "model_number": "SPDK bdev Controller", 00:07:20.596 "max_namespaces": 32, 00:07:20.596 "min_cntlid": 1, 00:07:20.596 "max_cntlid": 65519, 00:07:20.596 "namespaces": [ 00:07:20.596 { 00:07:20.596 "nsid": 1, 00:07:20.596 "bdev_name": "Null4", 00:07:20.596 "name": "Null4", 00:07:20.596 "nguid": "6EF0F1D60980406091AD361F0E484192", 00:07:20.596 "uuid": "6ef0f1d6-0980-4060-91ad-361f0e484192" 00:07:20.596 } 00:07:20.596 ] 00:07:20.596 } 00:07:20.596 ] 00:07:20.596 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.596 20:32:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:20.596 20:32:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:20.596 20:32:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:20.596 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.596 20:32:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.596 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:20.855 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.855 20:32:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:20.855 20:32:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:20.855 20:32:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:20.855 20:32:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:20.855 20:32:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:20.855 20:32:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:20.855 20:32:55 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:20.855 20:32:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:20.855 20:32:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:20.855 20:32:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:20.855 rmmod nvme_tcp 00:07:20.855 rmmod nvme_fabrics 00:07:20.855 rmmod nvme_keyring 00:07:20.855 20:32:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:20.855 20:32:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:20.855 20:32:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:20.855 20:32:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2543157 ']' 00:07:20.855 20:32:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2543157 00:07:20.855 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 2543157 ']' 00:07:20.855 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 2543157 00:07:20.855 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:07:20.855 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:20.855 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2543157 00:07:20.855 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:20.855 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:20.855 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2543157' 00:07:20.855 killing process with pid 2543157 00:07:20.855 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 2543157 00:07:20.855 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 2543157 00:07:21.114 20:32:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:21.114 20:32:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:21.114 20:32:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:21.114 20:32:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:21.114 20:32:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:21.114 20:32:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:21.114 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:21.114 20:32:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:23.019 20:32:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:23.019 00:07:23.019 real 0m9.284s 00:07:23.019 user 0m7.773s 00:07:23.019 sys 0m4.360s 00:07:23.019 20:32:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.019 20:32:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:23.019 ************************************ 00:07:23.019 END TEST nvmf_target_discovery 00:07:23.019 ************************************ 00:07:23.019 20:32:57 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:07:23.019 20:32:57 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:23.019 20:32:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:23.019 20:32:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.019 20:32:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:23.279 ************************************ 00:07:23.279 START TEST nvmf_referrals 00:07:23.279 ************************************ 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:23.279 * Looking for test storage... 00:07:23.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
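Using the referral addresses defined here, the checks traced further down reduce to the following flow. The RPC names and arguments are exactly as logged; the rpc alias and the bare [[ ]] assertions are an illustrative condensation, not the test source:

    rpc=rpc.py   # assumed alias for scripts/rpc.py

    # Add one discovery referral per address on port 4430 (referrals.sh@44-46)
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done

    # The target should now report exactly three referrals (referrals.sh@48)
    [[ $($rpc nvmf_discovery_get_referrals | jq length) == 3 ]]

    # Remove them again and expect an empty list (referrals.sh@52-56)
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $rpc nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done
    [[ $($rpc nvmf_discovery_get_referrals | jq length) == 0 ]]

The same verification is then repeated through the initiator path (nvme discover against port 8009), including the subsystem-qualified variants added with -n.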
00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:23.279 20:32:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:28.554 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:28.554 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:28.554 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:28.554 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:28.554 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:28.554 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:28.554 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:28.554 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:28.554 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:28.554 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:28.554 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:28.554 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:28.554 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:28.554 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:28.554 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:28.554 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:28.554 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:28.554 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:28.554 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:28.554 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:28.554 20:33:02 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:28.554 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:28.554 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:28.554 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:28.554 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:28.554 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:28.554 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:28.554 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:28.554 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:28.554 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:28.554 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:28.554 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:28.555 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:28.555 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:28.555 20:33:02 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:28.555 Found net devices under 0000:86:00.0: cvl_0_0 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:28.555 Found net devices under 0000:86:00.1: cvl_0_1 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:28.555 20:33:02 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:28.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:28.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:07:28.555 00:07:28.555 --- 10.0.0.2 ping statistics --- 00:07:28.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.555 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:28.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:28.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:07:28.555 00:07:28.555 --- 10.0.0.1 ping statistics --- 00:07:28.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.555 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2546854 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2546854 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 2546854 ']' 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
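The nvmf_tcp_init sequence traced above amounts to the following namespace plumbing, shown here as a sketch of the commands visible in the trace (the cvl_0_0/cvl_0_1 interface names are specific to this host's e810 ports):

    # Target port lives in its own namespace; the initiator stays in the host namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

With both pings answering, nvmf_tgt is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...) so that only the target-side port sees the listener.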
00:07:28.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:28.555 20:33:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:28.555 [2024-07-15 20:33:02.767106] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:07:28.555 [2024-07-15 20:33:02.767150] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.555 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.555 [2024-07-15 20:33:02.824742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:28.555 [2024-07-15 20:33:02.910519] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:28.555 [2024-07-15 20:33:02.910553] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:28.555 [2024-07-15 20:33:02.910560] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:28.555 [2024-07-15 20:33:02.910566] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:28.555 [2024-07-15 20:33:02.910572] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:28.555 [2024-07-15 20:33:02.910615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.555 [2024-07-15 20:33:02.910632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.555 [2024-07-15 20:33:02.910723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:28.555 [2024-07-15 20:33:02.910725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.124 20:33:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:29.124 20:33:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:07:29.124 20:33:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:29.124 20:33:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:29.124 20:33:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:29.384 [2024-07-15 20:33:03.619994] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:29.384 [2024-07-15 20:33:03.633387] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:29.384 20:33:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:29.643 20:33:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:29.643 20:33:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:29.643 20:33:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:29.643 20:33:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.643 20:33:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:29.644 20:33:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.644 20:33:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:29.644 20:33:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.644 20:33:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:29.644 20:33:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.644 20:33:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:29.644 20:33:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.644 20:33:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:29.644 20:33:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.644 20:33:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:29.644 20:33:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.644 20:33:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:29.644 20:33:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:29.644 20:33:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.644 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:29.644 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:29.644 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:29.644 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:29.644 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:29.644 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:29.644 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:29.903 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:29.903 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:29.903 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:07:29.903 20:33:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.903 20:33:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:29.903 20:33:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.903 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:29.903 20:33:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.903 20:33:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:29.903 20:33:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.903 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:29.903 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:29.903 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:29.903 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:29.903 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:29.903 20:33:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.903 20:33:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:29.903 20:33:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.903 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:29.903 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:29.903 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:29.903 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:29.903 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:29.903 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:29.903 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:29.903 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:30.161 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:30.161 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:30.161 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:30.161 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:30.161 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:30.161 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:30.161 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:30.161 20:33:04 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:30.161 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:30.161 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:30.161 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:30.161 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:30.161 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:30.419 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:30.419 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:30.419 20:33:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.419 20:33:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:30.419 20:33:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.419 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:30.419 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:30.419 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:30.419 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:30.419 20:33:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.419 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:30.419 20:33:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:30.419 20:33:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.419 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:30.419 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:30.419 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:30.419 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:30.419 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:30.419 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:30.419 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:30.419 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:30.678 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:30.678 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:30.678 20:33:04 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:30.678 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:30.678 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:30.678 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:30.678 20:33:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:30.678 20:33:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:30.678 20:33:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:30.678 20:33:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:30.678 20:33:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:30.678 20:33:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:30.678 20:33:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:30.678 20:33:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:30.678 20:33:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:30.678 20:33:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.678 20:33:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:30.678 20:33:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.678 20:33:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:30.678 20:33:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:30.678 20:33:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.678 20:33:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:30.678 20:33:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.678 20:33:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:30.678 20:33:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:30.678 20:33:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:30.678 20:33:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:30.678 20:33:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:30.678 20:33:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:30.678 20:33:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:30.937 
20:33:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:30.937 20:33:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:30.937 20:33:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:30.937 20:33:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:30.937 20:33:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:30.937 20:33:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:30.937 20:33:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:30.937 20:33:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:30.937 20:33:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:30.937 20:33:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:30.937 rmmod nvme_tcp 00:07:30.937 rmmod nvme_fabrics 00:07:30.938 rmmod nvme_keyring 00:07:30.938 20:33:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:30.938 20:33:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:30.938 20:33:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:30.938 20:33:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2546854 ']' 00:07:30.938 20:33:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2546854 00:07:30.938 20:33:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 2546854 ']' 00:07:30.938 20:33:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 2546854 00:07:30.938 20:33:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:07:30.938 20:33:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:30.938 20:33:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2546854 00:07:30.938 20:33:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:30.938 20:33:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:30.938 20:33:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2546854' 00:07:30.938 killing process with pid 2546854 00:07:30.938 20:33:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 2546854 00:07:30.938 20:33:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 2546854 00:07:31.197 20:33:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:31.197 20:33:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:31.197 20:33:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:31.197 20:33:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:31.197 20:33:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:31.197 20:33:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.197 20:33:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:31.197 20:33:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.735 20:33:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:33.735 00:07:33.735 real 0m10.087s 00:07:33.735 user 0m12.692s 00:07:33.735 sys 0m4.516s 00:07:33.735 20:33:07 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.735 20:33:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:33.735 ************************************ 00:07:33.735 END TEST nvmf_referrals 00:07:33.735 ************************************ 00:07:33.735 20:33:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:33.735 20:33:07 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:33.735 20:33:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:33.735 20:33:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.735 20:33:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:33.735 ************************************ 00:07:33.735 START TEST nvmf_connect_disconnect 00:07:33.735 ************************************ 00:07:33.735 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:33.735 * Looking for test storage... 00:07:33.735 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:33.735 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:33.735 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:33.735 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:33.735 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:33.735 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:33.735 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:33.735 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:33.735 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:33.735 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:33.735 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:33.736 20:33:07 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:33.736 20:33:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:39.043 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:39.043 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:07:39.043 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:39.043 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:39.043 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:39.043 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:39.043 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:39.043 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:07:39.043 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:39.043 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:07:39.043 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:07:39.043 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:07:39.043 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:07:39.043 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:07:39.043 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:07:39.043 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:39.043 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:39.043 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:39.043 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:39.044 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:39.044 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:39.044 20:33:13 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:39.044 Found net devices under 0000:86:00.0: cvl_0_0 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:39.044 Found net devices under 0000:86:00.1: cvl_0_1 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:39.044 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:39.044 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:07:39.044 00:07:39.044 --- 10.0.0.2 ping statistics --- 00:07:39.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.044 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:39.044 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:39.044 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:07:39.044 00:07:39.044 --- 10.0.0.1 ping statistics --- 00:07:39.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.044 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2551307 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2551307 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 2551307 ']' 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:39.044 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.045 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:39.045 20:33:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:39.045 [2024-07-15 20:33:13.463538] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:07:39.045 [2024-07-15 20:33:13.463583] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.045 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.045 [2024-07-15 20:33:13.522757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:39.303 [2024-07-15 20:33:13.604305] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:39.303 [2024-07-15 20:33:13.604338] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:39.303 [2024-07-15 20:33:13.604346] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:39.303 [2024-07-15 20:33:13.604352] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:39.303 [2024-07-15 20:33:13.604358] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:39.303 [2024-07-15 20:33:13.604402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.303 [2024-07-15 20:33:13.604422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.303 [2024-07-15 20:33:13.604506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:39.303 [2024-07-15 20:33:13.604508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.870 20:33:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:39.870 20:33:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:07:39.870 20:33:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:39.871 20:33:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:39.871 20:33:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:39.871 20:33:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:39.871 20:33:14 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:39.871 20:33:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.871 20:33:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:39.871 [2024-07-15 20:33:14.324318] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:39.871 20:33:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.871 20:33:14 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:39.871 20:33:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.871 20:33:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:39.871 20:33:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.130 20:33:14 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:40.130 20:33:14 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:40.130 20:33:14 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.130 20:33:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:40.130 20:33:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.130 20:33:14 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:40.130 20:33:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.130 20:33:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:40.130 20:33:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.130 20:33:14 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:40.130 20:33:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.130 20:33:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:40.130 [2024-07-15 20:33:14.376159] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:40.130 20:33:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.130 20:33:14 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:40.130 20:33:14 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:40.130 20:33:14 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:43.415 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:46.702 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:49.993 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:53.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:56.568 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:56.568 20:33:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:56.568 20:33:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:56.568 20:33:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:56.568 20:33:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:07:56.568 20:33:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:56.568 20:33:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:07:56.568 20:33:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:56.568 20:33:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:56.568 rmmod nvme_tcp 00:07:56.568 rmmod nvme_fabrics 00:07:56.568 rmmod nvme_keyring 00:07:56.568 20:33:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:56.568 20:33:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:07:56.568 20:33:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:07:56.568 20:33:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2551307 ']' 00:07:56.568 20:33:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2551307 00:07:56.568 20:33:30 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@948 -- # '[' -z 2551307 ']' 00:07:56.568 20:33:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 2551307 00:07:56.568 20:33:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:07:56.568 20:33:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:56.568 20:33:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2551307 00:07:56.568 20:33:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:56.568 20:33:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:56.568 20:33:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2551307' 00:07:56.568 killing process with pid 2551307 00:07:56.568 20:33:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 2551307 00:07:56.568 20:33:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 2551307 00:07:56.568 20:33:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:56.568 20:33:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:56.568 20:33:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:56.568 20:33:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:56.568 20:33:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:56.568 20:33:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.568 20:33:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:56.568 20:33:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.477 20:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:58.477 00:07:58.477 real 0m25.124s 00:07:58.477 user 1m9.734s 00:07:58.477 sys 0m5.380s 00:07:58.477 20:33:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.477 20:33:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:58.477 ************************************ 00:07:58.477 END TEST nvmf_connect_disconnect 00:07:58.477 ************************************ 00:07:58.477 20:33:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:58.477 20:33:32 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:58.477 20:33:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:58.477 20:33:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.477 20:33:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:58.477 ************************************ 00:07:58.477 START TEST nvmf_multitarget 00:07:58.477 ************************************ 00:07:58.477 20:33:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:58.768 * Looking for test storage... 
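
For reference, the connect_disconnect pass that just completed was driven entirely by five provisioning RPCs against the target's /var/tmp/spdk.sock socket; rpc_cmd in the trace is assumed here to be a thin wrapper over SPDK's scripts/rpc.py client. A minimal sketch with the values used in this run:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0       # TCP transport, 8192 B IO units
    rpc.py bdev_malloc_create 64 512                          # 64 MiB bdev, 512 B blocks -> Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Each of the five iterations then connects an initiator to cnode1 and tears it down again, which is what the repeated "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines above record.
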
00:07:58.768 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
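
Note the host-identity handling in the prologue above: common.sh calls nvme gen-hostnqn once and then threads the result through every initiator command, with the UUID portion of the NQN doubling as the host ID. A sketch of the idea (the extraction shown is an assumption; the exact expression common.sh uses may differ):

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # assumed: strip everything up to the last ':'
    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -a 10.0.0.2 -s 8009 -o json
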
00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:07:58.768 20:33:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:04.038 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:04.038 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:08:04.038 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:04.038 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:04.038 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:04.038 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:04.038 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:04.038 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:08:04.038 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:04.038 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:08:04.038 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:08:04.038 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:08:04.038 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:08:04.038 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:08:04.038 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:08:04.038 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:04.038 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:04.038 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:04.038 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:04.038 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:04.038 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:04.038 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:04.038 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:04.038 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:04.038 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:04.038 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:04.038 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:04.038 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:04.038 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:08:04.038 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:04.038 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:04.038 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:04.038 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:04.038 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:04.038 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:04.038 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:04.038 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:04.038 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.038 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:04.039 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:04.039 Found net devices under 0000:86:00.0: cvl_0_0 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
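
The device matching running here is a table lookup: gather_supported_nvmf_pci_devs pre-seeds arrays of known NVMe-oF-capable NICs by PCI vendor:device ID (0x8086:0x159b is the Intel E810 family present on this host, with x722 and several Mellanox IDs as alternatives), intersects them with the PCI bus cache, and resolves each matching port to its kernel netdev through sysfs. A hypothetical one-liner for that resolution step:

    ls /sys/bus/pci/devices/0000:86:00.0/net    # -> cvl_0_0 on this host
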
00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:04.039 Found net devices under 0000:86:00.1: cvl_0_1 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:04.039 20:33:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:04.039 20:33:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:04.039 20:33:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:04.039 20:33:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:04.039 20:33:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:04.039 20:33:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:04.039 20:33:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:04.039 20:33:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:04.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:04.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:08:04.039 00:08:04.039 --- 10.0.0.2 ping statistics --- 00:08:04.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.039 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:08:04.039 20:33:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:04.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:04.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:08:04.039 00:08:04.039 --- 10.0.0.1 ping statistics --- 00:08:04.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.039 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:08:04.039 20:33:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:04.039 20:33:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:08:04.039 20:33:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:04.039 20:33:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:04.039 20:33:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:04.039 20:33:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:04.039 20:33:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:04.039 20:33:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:04.039 20:33:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:04.039 20:33:38 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:04.039 20:33:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:04.039 20:33:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:04.039 20:33:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:04.039 20:33:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2557708 00:08:04.039 20:33:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2557708 00:08:04.039 20:33:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:04.039 20:33:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 2557708 ']' 00:08:04.039 20:33:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.039 20:33:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:04.039 20:33:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.039 20:33:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:04.039 20:33:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:04.039 [2024-07-15 20:33:38.220857] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
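
nvmfappstart, traced above, launches the target application inside the test namespace and records its pid so the exit trap can kill it later; waitforlisten then blocks until the RPC socket accepts connections before any rpc_cmd runs. A minimal sketch using this run's paths:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!                        # 2557708 in this run
    # waitforlisten: assumed polling loop until /var/tmp/spdk.sock is accepting RPCs
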
00:08:04.039 [2024-07-15 20:33:38.220898] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.039 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.039 [2024-07-15 20:33:38.278605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:04.039 [2024-07-15 20:33:38.359703] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:04.039 [2024-07-15 20:33:38.359738] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:04.039 [2024-07-15 20:33:38.359745] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:04.039 [2024-07-15 20:33:38.359752] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:04.039 [2024-07-15 20:33:38.359760] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:04.039 [2024-07-15 20:33:38.359800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.039 [2024-07-15 20:33:38.359887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:04.039 [2024-07-15 20:33:38.359973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:04.039 [2024-07-15 20:33:38.359975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.607 20:33:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:04.607 20:33:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:08:04.607 20:33:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:04.607 20:33:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:04.607 20:33:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:04.607 20:33:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:04.607 20:33:39 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:04.607 20:33:39 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:04.607 20:33:39 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:04.866 20:33:39 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:04.866 20:33:39 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:04.866 "nvmf_tgt_1" 00:08:04.866 20:33:39 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:05.124 "nvmf_tgt_2" 00:08:05.124 20:33:39 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:05.124 20:33:39 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:05.124 20:33:39 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:08:05.124 20:33:39 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:05.124 true 00:08:05.124 20:33:39 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:05.383 true 00:08:05.383 20:33:39 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:05.383 20:33:39 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:05.383 20:33:39 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:05.383 20:33:39 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:05.383 20:33:39 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:05.383 20:33:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:05.383 20:33:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:05.383 20:33:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:05.383 20:33:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:05.383 20:33:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:05.383 20:33:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:05.383 rmmod nvme_tcp 00:08:05.383 rmmod nvme_fabrics 00:08:05.383 rmmod nvme_keyring 00:08:05.383 20:33:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:05.383 20:33:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:05.383 20:33:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:05.383 20:33:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2557708 ']' 00:08:05.383 20:33:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2557708 00:08:05.383 20:33:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 2557708 ']' 00:08:05.383 20:33:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 2557708 00:08:05.383 20:33:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:08:05.383 20:33:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:05.383 20:33:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2557708 00:08:05.643 20:33:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:05.643 20:33:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:05.643 20:33:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2557708' 00:08:05.643 killing process with pid 2557708 00:08:05.643 20:33:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 2557708 00:08:05.643 20:33:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 2557708 00:08:05.643 20:33:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:05.643 20:33:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:05.643 20:33:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:05.643 20:33:40 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:05.643 20:33:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:05.643 20:33:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.643 20:33:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:05.643 20:33:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.181 20:33:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:08.181 00:08:08.181 real 0m9.266s 00:08:08.181 user 0m9.035s 00:08:08.181 sys 0m4.345s 00:08:08.181 20:33:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.181 20:33:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:08.181 ************************************ 00:08:08.181 END TEST nvmf_multitarget 00:08:08.181 ************************************ 00:08:08.181 20:33:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:08.181 20:33:42 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:08.181 20:33:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:08.181 20:33:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.181 20:33:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:08.181 ************************************ 00:08:08.181 START TEST nvmf_rpc 00:08:08.181 ************************************ 00:08:08.181 20:33:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:08.181 * Looking for test storage... 
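The nvmf_multitarget run that just ended validates dynamic target creation and teardown over the RPC interface: starting from the single default target, it adds two named targets, checks the count with jq, deletes them, and checks that the count drops back to one. A condensed sketch of that sequence, using the same multitarget_rpc.py helper invoked in the trace:

    # counts are checked with: nvmf_get_targets | jq length
    multitarget_rpc.py nvmf_get_targets | jq length             # 1 (default target)
    multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32
    multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32
    multitarget_rpc.py nvmf_get_targets | jq length             # 3
    multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1
    multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2
    multitarget_rpc.py nvmf_get_targets | jq length             # 1 again

The nvmf_rpc test that starts next drives the same nvmf_tgt binary through the generic rpc_cmd path rather than the multitarget helper.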
00:08:08.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:08.181 20:33:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:08.181 20:33:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:08:08.181 20:33:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.181 20:33:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.181 20:33:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.181 20:33:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.181 20:33:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.181 20:33:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.181 20:33:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.181 20:33:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.181 20:33:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.181 20:33:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.181 20:33:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:08.181 20:33:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:08.181 20:33:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.181 20:33:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.181 20:33:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:08.181 20:33:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:08.181 20:33:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:08.181 20:33:42 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.181 20:33:42 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.181 20:33:42 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.181 20:33:42 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.181 20:33:42 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.182 20:33:42 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.182 20:33:42 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:08:08.182 20:33:42 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.182 20:33:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:08:08.182 20:33:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:08.182 20:33:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:08.182 20:33:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:08.182 20:33:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.182 20:33:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.182 20:33:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:08.182 20:33:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:08.182 20:33:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:08.182 20:33:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:08:08.182 20:33:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:08:08.182 20:33:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:08.182 20:33:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:08.182 20:33:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:08.182 20:33:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:08.182 20:33:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:08.182 20:33:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.182 20:33:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.182 20:33:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.182 20:33:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:08.182 20:33:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:08.182 20:33:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:08:08.182 20:33:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
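What follows is nvmftestinit's physical-NIC discovery: gather_supported_nvmf_pci_devs walks the PCI bus and buckets devices into the e810, x722, and mlx arrays by vendor/device ID before selecting the TCP test interfaces. A hypothetical condensation of that classification, assuming the vendor IDs visible in the trace (intel=0x8086, mellanox=0x15b3); the real nvmf/common.sh covers more device IDs than listed here, and $pci, $vendor, $device stand in for the scan loop's variables:

    case "$vendor:$device" in
        0x8086:0x1592|0x8086:0x159b) e810+=("$pci") ;;   # Intel E810 (the ports found below)
        0x8086:0x37d2)               x722+=("$pci") ;;   # Intel X722
        0x15b3:*)                    mlx+=("$pci")  ;;   # Mellanox ConnectX family
    esac

On this machine the scan finds two E810 ports at 0000:86:00.0 and 0000:86:00.1, whose net devices cvl_0_0 and cvl_0_1 become the target and initiator interfaces respectively.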
00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:13.455 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:13.455 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:13.456 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:13.456 Found net devices under 0000:86:00.0: cvl_0_0 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:13.456 Found net devices under 0000:86:00.1: cvl_0_1 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:13.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:13.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:08:13.456 00:08:13.456 --- 10.0.0.2 ping statistics --- 00:08:13.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.456 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:13.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:13.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:08:13.456 00:08:13.456 --- 10.0.0.1 ping statistics --- 00:08:13.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.456 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2561488 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2561488 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 2561488 ']' 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:13.456 20:33:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.456 [2024-07-15 20:33:47.661520] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:08:13.456 [2024-07-15 20:33:47.661560] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.456 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.456 [2024-07-15 20:33:47.718293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:13.456 [2024-07-15 20:33:47.791147] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:13.456 [2024-07-15 20:33:47.791189] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:13.456 [2024-07-15 20:33:47.791196] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:13.456 [2024-07-15 20:33:47.791205] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:13.456 [2024-07-15 20:33:47.791210] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:13.456 [2024-07-15 20:33:47.791304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.456 [2024-07-15 20:33:47.791419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.456 [2024-07-15 20:33:47.791485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:13.456 [2024-07-15 20:33:47.791487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.023 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:14.023 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:14.023 20:33:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:14.023 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:14.023 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.282 20:33:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:14.282 20:33:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:14.282 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.282 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.282 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.282 20:33:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:08:14.282 "tick_rate": 2300000000, 00:08:14.282 "poll_groups": [ 00:08:14.282 { 00:08:14.282 "name": "nvmf_tgt_poll_group_000", 00:08:14.282 "admin_qpairs": 0, 00:08:14.282 "io_qpairs": 0, 00:08:14.282 "current_admin_qpairs": 0, 00:08:14.282 "current_io_qpairs": 0, 00:08:14.282 "pending_bdev_io": 0, 00:08:14.282 "completed_nvme_io": 0, 00:08:14.282 "transports": [] 00:08:14.282 }, 00:08:14.282 { 00:08:14.282 "name": "nvmf_tgt_poll_group_001", 00:08:14.282 "admin_qpairs": 0, 00:08:14.282 "io_qpairs": 0, 00:08:14.282 "current_admin_qpairs": 0, 00:08:14.282 "current_io_qpairs": 0, 00:08:14.282 "pending_bdev_io": 0, 00:08:14.282 "completed_nvme_io": 0, 00:08:14.282 "transports": [] 00:08:14.282 }, 00:08:14.282 { 00:08:14.282 "name": "nvmf_tgt_poll_group_002", 00:08:14.282 "admin_qpairs": 0, 00:08:14.282 "io_qpairs": 0, 00:08:14.282 "current_admin_qpairs": 0, 00:08:14.282 "current_io_qpairs": 0, 00:08:14.282 "pending_bdev_io": 0, 00:08:14.282 "completed_nvme_io": 0, 00:08:14.282 "transports": [] 00:08:14.282 }, 00:08:14.282 { 00:08:14.282 "name": "nvmf_tgt_poll_group_003", 00:08:14.282 "admin_qpairs": 0, 00:08:14.282 "io_qpairs": 0, 00:08:14.282 "current_admin_qpairs": 0, 00:08:14.282 "current_io_qpairs": 0, 00:08:14.282 "pending_bdev_io": 0, 00:08:14.282 "completed_nvme_io": 0, 00:08:14.282 "transports": [] 00:08:14.282 } 00:08:14.282 ] 00:08:14.282 }' 00:08:14.282 20:33:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:14.282 20:33:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:14.282 20:33:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:14.282 20:33:48 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:08:14.282 20:33:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:14.282 20:33:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:14.282 20:33:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:14.282 20:33:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:14.282 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.282 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.283 [2024-07-15 20:33:48.621553] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:14.283 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.283 20:33:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:14.283 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.283 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.283 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.283 20:33:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:08:14.283 "tick_rate": 2300000000, 00:08:14.283 "poll_groups": [ 00:08:14.283 { 00:08:14.283 "name": "nvmf_tgt_poll_group_000", 00:08:14.283 "admin_qpairs": 0, 00:08:14.283 "io_qpairs": 0, 00:08:14.283 "current_admin_qpairs": 0, 00:08:14.283 "current_io_qpairs": 0, 00:08:14.283 "pending_bdev_io": 0, 00:08:14.283 "completed_nvme_io": 0, 00:08:14.283 "transports": [ 00:08:14.283 { 00:08:14.283 "trtype": "TCP" 00:08:14.283 } 00:08:14.283 ] 00:08:14.283 }, 00:08:14.283 { 00:08:14.283 "name": "nvmf_tgt_poll_group_001", 00:08:14.283 "admin_qpairs": 0, 00:08:14.283 "io_qpairs": 0, 00:08:14.283 "current_admin_qpairs": 0, 00:08:14.283 "current_io_qpairs": 0, 00:08:14.283 "pending_bdev_io": 0, 00:08:14.283 "completed_nvme_io": 0, 00:08:14.283 "transports": [ 00:08:14.283 { 00:08:14.283 "trtype": "TCP" 00:08:14.283 } 00:08:14.283 ] 00:08:14.283 }, 00:08:14.283 { 00:08:14.283 "name": "nvmf_tgt_poll_group_002", 00:08:14.283 "admin_qpairs": 0, 00:08:14.283 "io_qpairs": 0, 00:08:14.283 "current_admin_qpairs": 0, 00:08:14.283 "current_io_qpairs": 0, 00:08:14.283 "pending_bdev_io": 0, 00:08:14.283 "completed_nvme_io": 0, 00:08:14.283 "transports": [ 00:08:14.283 { 00:08:14.283 "trtype": "TCP" 00:08:14.283 } 00:08:14.283 ] 00:08:14.283 }, 00:08:14.283 { 00:08:14.283 "name": "nvmf_tgt_poll_group_003", 00:08:14.283 "admin_qpairs": 0, 00:08:14.283 "io_qpairs": 0, 00:08:14.283 "current_admin_qpairs": 0, 00:08:14.283 "current_io_qpairs": 0, 00:08:14.283 "pending_bdev_io": 0, 00:08:14.283 "completed_nvme_io": 0, 00:08:14.283 "transports": [ 00:08:14.283 { 00:08:14.283 "trtype": "TCP" 00:08:14.283 } 00:08:14.283 ] 00:08:14.283 } 00:08:14.283 ] 00:08:14.283 }' 00:08:14.283 20:33:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:14.283 20:33:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:14.283 20:33:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:14.283 20:33:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:14.283 20:33:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:14.283 20:33:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:14.283 20:33:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
00:08:14.283 20:33:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:14.283 20:33:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:14.283 20:33:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:14.283 20:33:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:08:14.283 20:33:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:14.283 20:33:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:14.283 20:33:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:14.283 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.283 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.283 Malloc1 00:08:14.283 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.283 20:33:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:14.283 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.283 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.542 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.542 20:33:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:14.542 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.542 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.542 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.542 20:33:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:14.542 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.542 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.542 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.542 20:33:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:14.542 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.542 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.542 [2024-07-15 20:33:48.793719] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:14.542 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.542 20:33:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:14.542 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:14.542 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:14.542 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:08:14.542 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:14.542 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:14.542 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:14.542 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:14.542 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:14.542 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:14.542 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:14.542 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:14.542 [2024-07-15 20:33:48.818346] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:08:14.542 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:14.542 could not add new controller: failed to write to nvme-fabrics device 00:08:14.542 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:14.542 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:14.542 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:14.542 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:14.542 20:33:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:14.542 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.542 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.542 20:33:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.542 20:33:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:15.478 20:33:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:08:15.478 20:33:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:15.478 20:33:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:15.478 20:33:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:15.478 20:33:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:18.033 20:33:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:18.033 20:33:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:18.033 20:33:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:18.033 20:33:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:18.033 20:33:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:18.033 20:33:51 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:18.033 20:33:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:18.033 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:18.033 20:33:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:18.033 20:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:18.033 20:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:18.033 20:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:18.033 20:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:18.033 20:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:18.033 20:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:18.033 20:33:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:18.033 20:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.033 20:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:18.033 20:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.033 20:33:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:18.033 20:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:18.033 20:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:18.033 20:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:08:18.033 20:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:18.033 20:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:18.033 20:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:18.033 20:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:18.034 20:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:18.034 20:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:18.034 20:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:18.034 20:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:18.034 [2024-07-15 20:33:52.122789] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:08:18.034 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:18.034 could not add new controller: failed to write to nvme-fabrics device 00:08:18.034 20:33:52 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:08:18.034 20:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:18.034 20:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:18.034 20:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:18.034 20:33:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:08:18.034 20:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.034 20:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:18.034 20:33:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.034 20:33:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:18.967 20:33:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:08:18.967 20:33:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:18.967 20:33:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:18.967 20:33:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:18.967 20:33:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:20.918 20:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:20.918 20:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:20.918 20:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:20.918 20:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:20.918 20:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:20.918 20:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:20.918 20:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:21.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:21.176 20:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:21.176 20:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:21.176 20:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:21.176 20:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:21.176 20:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:21.176 20:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:21.176 20:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:21.176 20:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:21.176 20:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.176 20:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.176 20:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.177 20:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:08:21.177 20:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:21.177 20:33:55 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:21.177 20:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.177 20:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.177 20:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.177 20:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:21.177 20:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.177 20:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.177 [2024-07-15 20:33:55.526381] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:21.177 20:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.177 20:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:21.177 20:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.177 20:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.177 20:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.177 20:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:21.177 20:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.177 20:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.177 20:33:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.177 20:33:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:22.607 20:33:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:22.607 20:33:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:22.607 20:33:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:22.607 20:33:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:22.607 20:33:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:24.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.512 [2024-07-15 20:33:58.908909] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.512 20:33:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:25.890 20:34:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:25.890 20:34:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:08:25.890 20:34:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:25.890 20:34:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:25.890 20:34:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:27.794 20:34:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:27.794 20:34:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:27.794 20:34:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:27.795 20:34:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:27.795 20:34:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:27.795 20:34:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:27.795 20:34:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:27.795 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:27.795 20:34:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:27.795 20:34:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:27.795 20:34:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:27.795 20:34:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:27.795 20:34:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:27.795 20:34:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:27.795 20:34:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:27.795 20:34:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:27.795 20:34:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.795 20:34:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.795 20:34:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.795 20:34:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:27.795 20:34:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.795 20:34:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.795 20:34:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.795 20:34:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:27.795 20:34:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:27.795 20:34:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.795 20:34:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.795 20:34:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.795 20:34:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:27.795 20:34:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.795 20:34:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.795 [2024-07-15 20:34:02.255703] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:27.795 20:34:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.795 20:34:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:27.795 20:34:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.795 20:34:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.795 20:34:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.795 20:34:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:27.795 20:34:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.795 20:34:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.795 20:34:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.795 20:34:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:29.172 20:34:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:29.172 20:34:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:29.172 20:34:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:29.172 20:34:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:29.172 20:34:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:31.077 20:34:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:31.077 20:34:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:31.077 20:34:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:31.077 20:34:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:31.077 20:34:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:31.077 20:34:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:31.077 20:34:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:31.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.077 20:34:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:31.077 20:34:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:31.077 20:34:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:31.077 20:34:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:31.077 20:34:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:31.077 20:34:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:31.077 20:34:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:31.077 20:34:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:31.077 20:34:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.077 20:34:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.336 20:34:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:08:31.336 20:34:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:31.336 20:34:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.336 20:34:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.336 20:34:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.336 20:34:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:31.336 20:34:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:31.336 20:34:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.336 20:34:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.337 20:34:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.337 20:34:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:31.337 20:34:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.337 20:34:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.337 [2024-07-15 20:34:05.584002] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:31.337 20:34:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.337 20:34:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:31.337 20:34:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.337 20:34:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.337 20:34:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.337 20:34:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:31.337 20:34:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.337 20:34:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.337 20:34:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.337 20:34:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:32.273 20:34:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:32.273 20:34:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:32.273 20:34:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:32.273 20:34:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:32.273 20:34:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:34.807 20:34:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:34.807 20:34:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:34.807 20:34:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:34.807 20:34:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:34.807 20:34:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:34.807 
20:34:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:34.807 20:34:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:34.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.807 20:34:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:34.807 20:34:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:34.807 20:34:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:34.807 20:34:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:34.807 20:34:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:34.807 20:34:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:34.807 20:34:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:34.807 20:34:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:34.807 20:34:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.807 20:34:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.807 20:34:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.807 20:34:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:34.807 20:34:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.807 20:34:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.807 20:34:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.807 20:34:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:34.808 20:34:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:34.808 20:34:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.808 20:34:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.808 20:34:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.808 20:34:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:34.808 20:34:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.808 20:34:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.808 [2024-07-15 20:34:08.865514] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:34.808 20:34:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.808 20:34:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:34.808 20:34:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.808 20:34:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.808 20:34:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.808 20:34:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:34.808 20:34:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.808 20:34:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.808 20:34:08 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.808 20:34:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:35.744 20:34:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:35.744 20:34:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:35.744 20:34:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:35.744 20:34:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:35.744 20:34:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:37.657 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:37.657 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:37.657 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:37.657 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:37.657 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:37.657 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:37.657 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:37.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.917 [2024-07-15 20:34:12.231432] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.917 [2024-07-15 20:34:12.279531] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.917 [2024-07-15 20:34:12.331666] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
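The two loops traced above come from target/rpc.sh: the first (for i in $(seq 1 $loops), line 81) builds a subsystem, attaches from the initiator, and tears everything down; the second (line 99) cycles the subsystem without connecting. Condensed into a by-hand sequence with spdk/scripts/rpc.py, using the exact NQN, serial, and address values from this run — a minimal sketch, not the script's verbatim body:

    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    # waitforserial: poll (up to 16 tries, 2 s apart per the trace) until the namespace shows up
    until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do sleep 2; done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1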
00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.917 [2024-07-15 20:34:12.379843] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.917 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.918 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.918 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:37.918 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.918 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
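After the final pass the test dumps transport statistics (the JSON in the trace that follows) and asserts the aggregate queue-pair counts with jsum, which the trace shows is just a jq projection reduced by awk. A standalone sketch of that reduction, assuming the rpc_cmd wrapper resolves to scripts/rpc.py and the output is saved to a hypothetical stats.json:

    scripts/rpc.py nvmf_get_stats > stats.json
    jq '.poll_groups[].admin_qpairs' stats.json | awk '{s+=$1}END{print s}'   # 7 in this run (2+2+1+2)
    jq '.poll_groups[].io_qpairs' stats.json | awk '{s+=$1}END{print s}'      # 672 in this run (4 x 168)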
00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:38.177 [2024-07-15 20:34:12.428014] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:08:38.177 "tick_rate": 2300000000, 00:08:38.177 "poll_groups": [ 00:08:38.177 { 00:08:38.177 "name": "nvmf_tgt_poll_group_000", 00:08:38.177 "admin_qpairs": 2, 00:08:38.177 "io_qpairs": 168, 00:08:38.177 "current_admin_qpairs": 0, 00:08:38.177 "current_io_qpairs": 0, 00:08:38.177 "pending_bdev_io": 0, 00:08:38.177 "completed_nvme_io": 269, 00:08:38.177 "transports": [ 00:08:38.177 { 00:08:38.177 "trtype": "TCP" 00:08:38.177 } 00:08:38.177 ] 00:08:38.177 }, 00:08:38.177 { 00:08:38.177 "name": "nvmf_tgt_poll_group_001", 00:08:38.177 "admin_qpairs": 2, 00:08:38.177 "io_qpairs": 168, 00:08:38.177 "current_admin_qpairs": 0, 00:08:38.177 "current_io_qpairs": 0, 00:08:38.177 "pending_bdev_io": 0, 00:08:38.177 "completed_nvme_io": 268, 00:08:38.177 "transports": [ 00:08:38.177 { 00:08:38.177 "trtype": "TCP" 00:08:38.177 } 00:08:38.177 ] 00:08:38.177 }, 00:08:38.177 { 
00:08:38.177 "name": "nvmf_tgt_poll_group_002", 00:08:38.177 "admin_qpairs": 1, 00:08:38.177 "io_qpairs": 168, 00:08:38.177 "current_admin_qpairs": 0, 00:08:38.177 "current_io_qpairs": 0, 00:08:38.177 "pending_bdev_io": 0, 00:08:38.177 "completed_nvme_io": 267, 00:08:38.177 "transports": [ 00:08:38.177 { 00:08:38.177 "trtype": "TCP" 00:08:38.177 } 00:08:38.177 ] 00:08:38.177 }, 00:08:38.177 { 00:08:38.177 "name": "nvmf_tgt_poll_group_003", 00:08:38.177 "admin_qpairs": 2, 00:08:38.177 "io_qpairs": 168, 00:08:38.177 "current_admin_qpairs": 0, 00:08:38.177 "current_io_qpairs": 0, 00:08:38.177 "pending_bdev_io": 0, 00:08:38.177 "completed_nvme_io": 218, 00:08:38.177 "transports": [ 00:08:38.177 { 00:08:38.177 "trtype": "TCP" 00:08:38.177 } 00:08:38.177 ] 00:08:38.177 } 00:08:38.177 ] 00:08:38.177 }' 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:38.177 rmmod nvme_tcp 00:08:38.177 rmmod nvme_fabrics 00:08:38.177 rmmod nvme_keyring 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2561488 ']' 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2561488 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 2561488 ']' 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 2561488 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:38.177 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2561488 00:08:38.437 20:34:12 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:38.437 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:38.437 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2561488' 00:08:38.437 killing process with pid 2561488 00:08:38.437 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 2561488 00:08:38.437 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 2561488 00:08:38.437 20:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:38.437 20:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:38.437 20:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:38.437 20:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:38.437 20:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:38.437 20:34:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.437 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:38.437 20:34:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.973 20:34:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:40.973 00:08:40.973 real 0m32.728s 00:08:40.973 user 1m41.492s 00:08:40.973 sys 0m5.785s 00:08:40.973 20:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:40.973 20:34:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.973 ************************************ 00:08:40.973 END TEST nvmf_rpc 00:08:40.973 ************************************ 00:08:40.973 20:34:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:40.973 20:34:14 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:40.973 20:34:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:40.973 20:34:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:40.973 20:34:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:40.973 ************************************ 00:08:40.973 START TEST nvmf_invalid 00:08:40.973 ************************************ 00:08:40.973 20:34:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:40.973 * Looking for test storage... 
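That closes target/rpc.sh: nvmftestfini unloads the initiator modules, kills the target, and flushes the test interface before the harness moves on to nvmf_invalid below. Condensed from the trace (a paraphrase — _remove_spdk_ns runs with its output redirected away, so only the final address flush is visible):

    sync
    modprobe -v -r nvme-tcp        # drops nvme_tcp, nvme_fabrics, nvme_keyring, as logged
    modprobe -v -r nvme-fabrics
    kill 2561488 && wait 2561488   # the nvmf_tgt pid from this run
    ip -4 addr flush cvl_0_1       # drop the initiator-side test address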
00:08:40.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:40.973 20:34:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:40.973 20:34:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:08:40.973 20:34:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:40.973 20:34:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:40.973 20:34:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:40.973 20:34:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:40.973 20:34:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:40.973 20:34:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:08:40.974 20:34:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:45.224 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:45.224 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:45.225 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:45.225 Found net devices under 0000:86:00.0: cvl_0_0 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:45.225 Found net devices under 0000:86:00.1: cvl_0_1 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:45.225 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:45.484 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:45.484 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:45.484 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:45.484 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:45.484 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:08:45.484 00:08:45.484 --- 10.0.0.2 ping statistics --- 00:08:45.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.484 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:08:45.484 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:45.484 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:45.484 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:08:45.484 00:08:45.484 --- 10.0.0.1 ping statistics --- 00:08:45.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.484 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:08:45.484 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:45.484 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:08:45.484 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:45.484 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:45.484 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:45.484 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:45.484 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:45.484 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:45.484 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:45.484 20:34:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:08:45.484 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:45.484 20:34:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:45.484 20:34:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:45.484 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2569099 00:08:45.484 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2569099 00:08:45.484 20:34:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 2569099 ']' 00:08:45.484 20:34:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.484 20:34:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:45.484 20:34:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.484 20:34:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:45.484 20:34:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:45.484 20:34:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:45.484 [2024-07-15 20:34:19.860769] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:08:45.484 [2024-07-15 20:34:19.860811] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.484 EAL: No free 2048 kB hugepages reported on node 1 00:08:45.484 [2024-07-15 20:34:19.916382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:45.742 [2024-07-15 20:34:19.997037] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.743 [2024-07-15 20:34:19.997071] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:45.743 [2024-07-15 20:34:19.997078] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:45.743 [2024-07-15 20:34:19.997084] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:45.743 [2024-07-15 20:34:19.997089] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:45.743 [2024-07-15 20:34:19.997123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.743 [2024-07-15 20:34:19.997230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:45.743 [2024-07-15 20:34:19.997246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:45.743 [2024-07-15 20:34:19.997248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.309 20:34:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:46.309 20:34:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:08:46.309 20:34:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:46.309 20:34:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:46.309 20:34:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:46.309 20:34:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:46.309 20:34:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:46.309 20:34:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode28299 00:08:46.567 [2024-07-15 20:34:20.877765] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:08:46.567 20:34:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:08:46.567 { 00:08:46.567 "nqn": "nqn.2016-06.io.spdk:cnode28299", 00:08:46.567 "tgt_name": "foobar", 00:08:46.567 "method": "nvmf_create_subsystem", 00:08:46.567 "req_id": 1 00:08:46.567 } 00:08:46.567 Got JSON-RPC error response 00:08:46.567 response: 00:08:46.567 { 00:08:46.567 "code": -32603, 00:08:46.567 "message": "Unable to find target foobar" 00:08:46.567 }' 00:08:46.567 20:34:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:08:46.567 { 00:08:46.567 "nqn": "nqn.2016-06.io.spdk:cnode28299", 00:08:46.567 "tgt_name": "foobar", 00:08:46.567 "method": "nvmf_create_subsystem", 00:08:46.567 "req_id": 1 00:08:46.567 } 00:08:46.567 Got JSON-RPC error response 00:08:46.567 response: 00:08:46.567 { 00:08:46.567 "code": -32603, 00:08:46.567 "message": "Unable to find target foobar" 
00:08:46.567 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:08:46.567 20:34:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:08:46.567 20:34:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode1171 00:08:46.826 [2024-07-15 20:34:21.070455] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1171: invalid serial number 'SPDKISFASTANDAWESOME' 00:08:46.826 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:08:46.826 { 00:08:46.826 "nqn": "nqn.2016-06.io.spdk:cnode1171", 00:08:46.826 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:46.826 "method": "nvmf_create_subsystem", 00:08:46.826 "req_id": 1 00:08:46.826 } 00:08:46.826 Got JSON-RPC error response 00:08:46.826 response: 00:08:46.826 { 00:08:46.826 "code": -32602, 00:08:46.826 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:46.826 }' 00:08:46.826 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:08:46.826 { 00:08:46.826 "nqn": "nqn.2016-06.io.spdk:cnode1171", 00:08:46.826 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:46.826 "method": "nvmf_create_subsystem", 00:08:46.826 "req_id": 1 00:08:46.826 } 00:08:46.826 Got JSON-RPC error response 00:08:46.826 response: 00:08:46.826 { 00:08:46.826 "code": -32602, 00:08:46.826 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:46.826 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:46.826 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:08:46.826 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode21416 00:08:46.826 [2024-07-15 20:34:21.263092] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21416: invalid model number 'SPDK_Controller' 00:08:46.826 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:08:46.826 { 00:08:46.826 "nqn": "nqn.2016-06.io.spdk:cnode21416", 00:08:46.826 "model_number": "SPDK_Controller\u001f", 00:08:46.826 "method": "nvmf_create_subsystem", 00:08:46.826 "req_id": 1 00:08:46.826 } 00:08:46.826 Got JSON-RPC error response 00:08:46.826 response: 00:08:46.826 { 00:08:46.826 "code": -32602, 00:08:46.826 "message": "Invalid MN SPDK_Controller\u001f" 00:08:46.826 }' 00:08:46.826 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:08:46.826 { 00:08:46.826 "nqn": "nqn.2016-06.io.spdk:cnode21416", 00:08:46.826 "model_number": "SPDK_Controller\u001f", 00:08:46.826 "method": "nvmf_create_subsystem", 00:08:46.826 "req_id": 1 00:08:46.826 } 00:08:46.826 Got JSON-RPC error response 00:08:46.826 response: 00:08:46.826 { 00:08:46.826 "code": -32602, 00:08:46.826 "message": "Invalid MN SPDK_Controller\u001f" 00:08:46.826 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:46.826 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:08:46.826 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:08:46.826 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' 
'84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:46.826 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:46.826 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:08:46.826 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:46.826 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:46.826 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:08:46.826 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:08:46.826 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:08:46.826 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:46.826 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.085 
20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.085 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ) == \- ]] 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo ')u{c@nj3kwH?t7eQ^M++$' 00:08:47.086 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ')u{c@nj3kwH?t7eQ^M++$' nqn.2016-06.io.spdk:cnode30588 00:08:47.346 [2024-07-15 20:34:21.580175] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30588: invalid serial number ')u{c@nj3kwH?t7eQ^M++$' 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:08:47.346 { 00:08:47.346 "nqn": "nqn.2016-06.io.spdk:cnode30588", 00:08:47.346 "serial_number": ")u{c@nj3kwH?t7eQ^M++$", 00:08:47.346 "method": "nvmf_create_subsystem", 00:08:47.346 "req_id": 1 00:08:47.346 } 00:08:47.346 Got JSON-RPC error response 00:08:47.346 response: 00:08:47.346 { 00:08:47.346 "code": -32602, 00:08:47.346 "message": "Invalid SN )u{c@nj3kwH?t7eQ^M++$" 00:08:47.346 }' 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:08:47.346 { 00:08:47.346 "nqn": "nqn.2016-06.io.spdk:cnode30588", 00:08:47.346 "serial_number": ")u{c@nj3kwH?t7eQ^M++$", 00:08:47.346 "method": "nvmf_create_subsystem", 00:08:47.346 "req_id": 1 00:08:47.346 } 00:08:47.346 Got JSON-RPC error response 00:08:47.346 response: 00:08:47.346 { 00:08:47.346 "code": -32602, 00:08:47.346 "message": "Invalid SN )u{c@nj3kwH?t7eQ^M++$" 00:08:47.346 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll++ )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < 
length )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 68 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:08:47.346 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 
00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 
00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.347 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.606 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:08:47.606 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:08:47.606 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:08:47.606 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.606 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.606 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:08:47.606 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:08:47.606 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:08:47.606 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.606 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.606 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:08:47.606 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:08:47.606 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:08:47.606 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.606 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.606 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:08:47.606 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:08:47.606 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:08:47.606 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.606 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.606 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:08:47.606 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:08:47.606 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:08:47.606 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.606 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.606 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:08:47.606 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:08:47.606 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:08:47.606 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.606 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.606 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:08:47.606 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:08:47.606 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:08:47.606 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:47.606 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:47.606 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ z == \- ]] 00:08:47.606 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'zL7d}KT-r&D:oe[mjD_hQ9fCIm?_Aj, q9@GQF3Jv' 00:08:47.606 20:34:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem -d 'zL7d}KT-r&D:oe[mjD_hQ9fCIm?_Aj, q9@GQF3Jv' nqn.2016-06.io.spdk:cnode21018 00:08:47.606 [2024-07-15 20:34:22.033688] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21018: invalid model number 'zL7d}KT-r&D:oe[mjD_hQ9fCIm?_Aj, q9@GQF3Jv' 00:08:47.606 20:34:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:08:47.606 { 00:08:47.606 "nqn": "nqn.2016-06.io.spdk:cnode21018", 00:08:47.606 "model_number": "zL7d}KT-r&D:oe[mjD_hQ9fCIm?_Aj, q9@GQF3Jv", 00:08:47.606 "method": "nvmf_create_subsystem", 00:08:47.606 "req_id": 1 00:08:47.606 } 00:08:47.606 Got JSON-RPC error response 00:08:47.606 response: 00:08:47.606 { 00:08:47.606 "code": -32602, 00:08:47.606 "message": "Invalid MN zL7d}KT-r&D:oe[mjD_hQ9fCIm?_Aj, q9@GQF3Jv" 00:08:47.606 }' 00:08:47.606 20:34:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:08:47.606 { 00:08:47.606 "nqn": "nqn.2016-06.io.spdk:cnode21018", 00:08:47.606 "model_number": "zL7d}KT-r&D:oe[mjD_hQ9fCIm?_Aj, q9@GQF3Jv", 00:08:47.606 "method": "nvmf_create_subsystem", 00:08:47.606 "req_id": 1 00:08:47.606 } 00:08:47.606 Got JSON-RPC error response 00:08:47.606 response: 00:08:47.606 { 00:08:47.606 "code": -32602, 00:08:47.606 "message": "Invalid MN zL7d}KT-r&D:oe[mjD_hQ9fCIm?_Aj, q9@GQF3Jv" 00:08:47.606 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:47.606 20:34:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:08:47.865 [2024-07-15 20:34:22.226421] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:47.865 20:34:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:08:48.124 20:34:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:08:48.124 20:34:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:08:48.124 20:34:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:08:48.124 20:34:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:08:48.124 20:34:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:08:48.124 [2024-07-15 20:34:22.599669] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:08:48.383 20:34:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:08:48.383 { 00:08:48.383 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:48.383 "listen_address": { 00:08:48.383 "trtype": "tcp", 00:08:48.383 "traddr": "", 00:08:48.383 "trsvcid": "4421" 00:08:48.383 }, 00:08:48.383 "method": "nvmf_subsystem_remove_listener", 00:08:48.383 "req_id": 1 00:08:48.383 } 00:08:48.383 Got JSON-RPC error response 00:08:48.383 response: 00:08:48.383 { 00:08:48.383 "code": -32602, 00:08:48.383 "message": "Invalid parameters" 00:08:48.383 }' 00:08:48.383 20:34:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:08:48.383 { 00:08:48.383 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:48.383 "listen_address": { 00:08:48.383 "trtype": "tcp", 00:08:48.383 "traddr": "", 00:08:48.383 "trsvcid": "4421" 00:08:48.383 }, 00:08:48.383 "method": "nvmf_subsystem_remove_listener", 00:08:48.383 "req_id": 1 00:08:48.383 } 00:08:48.383 Got JSON-RPC error response 00:08:48.383 response: 00:08:48.383 { 00:08:48.383 
"code": -32602, 00:08:48.383 "message": "Invalid parameters" 00:08:48.383 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:08:48.383 20:34:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17006 -i 0 00:08:48.383 [2024-07-15 20:34:22.780209] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17006: invalid cntlid range [0-65519] 00:08:48.383 20:34:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:08:48.383 { 00:08:48.383 "nqn": "nqn.2016-06.io.spdk:cnode17006", 00:08:48.383 "min_cntlid": 0, 00:08:48.383 "method": "nvmf_create_subsystem", 00:08:48.383 "req_id": 1 00:08:48.383 } 00:08:48.383 Got JSON-RPC error response 00:08:48.383 response: 00:08:48.383 { 00:08:48.383 "code": -32602, 00:08:48.383 "message": "Invalid cntlid range [0-65519]" 00:08:48.383 }' 00:08:48.383 20:34:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:08:48.383 { 00:08:48.383 "nqn": "nqn.2016-06.io.spdk:cnode17006", 00:08:48.383 "min_cntlid": 0, 00:08:48.383 "method": "nvmf_create_subsystem", 00:08:48.383 "req_id": 1 00:08:48.383 } 00:08:48.383 Got JSON-RPC error response 00:08:48.383 response: 00:08:48.383 { 00:08:48.383 "code": -32602, 00:08:48.383 "message": "Invalid cntlid range [0-65519]" 00:08:48.383 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:48.383 20:34:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode845 -i 65520 00:08:48.642 [2024-07-15 20:34:22.965068] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode845: invalid cntlid range [65520-65519] 00:08:48.642 20:34:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:08:48.642 { 00:08:48.642 "nqn": "nqn.2016-06.io.spdk:cnode845", 00:08:48.642 "min_cntlid": 65520, 00:08:48.642 "method": "nvmf_create_subsystem", 00:08:48.642 "req_id": 1 00:08:48.642 } 00:08:48.642 Got JSON-RPC error response 00:08:48.642 response: 00:08:48.642 { 00:08:48.642 "code": -32602, 00:08:48.642 "message": "Invalid cntlid range [65520-65519]" 00:08:48.642 }' 00:08:48.642 20:34:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:08:48.642 { 00:08:48.642 "nqn": "nqn.2016-06.io.spdk:cnode845", 00:08:48.642 "min_cntlid": 65520, 00:08:48.642 "method": "nvmf_create_subsystem", 00:08:48.642 "req_id": 1 00:08:48.642 } 00:08:48.642 Got JSON-RPC error response 00:08:48.642 response: 00:08:48.642 { 00:08:48.642 "code": -32602, 00:08:48.642 "message": "Invalid cntlid range [65520-65519]" 00:08:48.642 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:48.642 20:34:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20636 -I 0 00:08:48.900 [2024-07-15 20:34:23.145718] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20636: invalid cntlid range [1-0] 00:08:48.900 20:34:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:08:48.900 { 00:08:48.900 "nqn": "nqn.2016-06.io.spdk:cnode20636", 00:08:48.900 "max_cntlid": 0, 00:08:48.900 "method": "nvmf_create_subsystem", 00:08:48.900 "req_id": 1 00:08:48.900 } 00:08:48.900 Got JSON-RPC error response 00:08:48.900 response: 00:08:48.900 { 00:08:48.900 "code": -32602, 00:08:48.900 "message": 
"Invalid cntlid range [1-0]" 00:08:48.900 }' 00:08:48.900 20:34:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:08:48.900 { 00:08:48.900 "nqn": "nqn.2016-06.io.spdk:cnode20636", 00:08:48.900 "max_cntlid": 0, 00:08:48.900 "method": "nvmf_create_subsystem", 00:08:48.900 "req_id": 1 00:08:48.900 } 00:08:48.900 Got JSON-RPC error response 00:08:48.900 response: 00:08:48.900 { 00:08:48.900 "code": -32602, 00:08:48.900 "message": "Invalid cntlid range [1-0]" 00:08:48.900 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:48.900 20:34:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20570 -I 65520 00:08:48.900 [2024-07-15 20:34:23.342362] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20570: invalid cntlid range [1-65520] 00:08:48.901 20:34:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:08:48.901 { 00:08:48.901 "nqn": "nqn.2016-06.io.spdk:cnode20570", 00:08:48.901 "max_cntlid": 65520, 00:08:48.901 "method": "nvmf_create_subsystem", 00:08:48.901 "req_id": 1 00:08:48.901 } 00:08:48.901 Got JSON-RPC error response 00:08:48.901 response: 00:08:48.901 { 00:08:48.901 "code": -32602, 00:08:48.901 "message": "Invalid cntlid range [1-65520]" 00:08:48.901 }' 00:08:48.901 20:34:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:08:48.901 { 00:08:48.901 "nqn": "nqn.2016-06.io.spdk:cnode20570", 00:08:48.901 "max_cntlid": 65520, 00:08:48.901 "method": "nvmf_create_subsystem", 00:08:48.901 "req_id": 1 00:08:48.901 } 00:08:48.901 Got JSON-RPC error response 00:08:48.901 response: 00:08:48.901 { 00:08:48.901 "code": -32602, 00:08:48.901 "message": "Invalid cntlid range [1-65520]" 00:08:48.901 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:48.901 20:34:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13964 -i 6 -I 5 00:08:49.159 [2024-07-15 20:34:23.535036] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13964: invalid cntlid range [6-5] 00:08:49.159 20:34:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:08:49.159 { 00:08:49.159 "nqn": "nqn.2016-06.io.spdk:cnode13964", 00:08:49.159 "min_cntlid": 6, 00:08:49.159 "max_cntlid": 5, 00:08:49.159 "method": "nvmf_create_subsystem", 00:08:49.159 "req_id": 1 00:08:49.159 } 00:08:49.159 Got JSON-RPC error response 00:08:49.159 response: 00:08:49.159 { 00:08:49.159 "code": -32602, 00:08:49.159 "message": "Invalid cntlid range [6-5]" 00:08:49.159 }' 00:08:49.159 20:34:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:08:49.159 { 00:08:49.159 "nqn": "nqn.2016-06.io.spdk:cnode13964", 00:08:49.159 "min_cntlid": 6, 00:08:49.159 "max_cntlid": 5, 00:08:49.159 "method": "nvmf_create_subsystem", 00:08:49.159 "req_id": 1 00:08:49.159 } 00:08:49.159 Got JSON-RPC error response 00:08:49.159 response: 00:08:49.159 { 00:08:49.159 "code": -32602, 00:08:49.159 "message": "Invalid cntlid range [6-5]" 00:08:49.159 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:49.159 20:34:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:08:49.417 20:34:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:08:49.417 { 00:08:49.417 
"name": "foobar", 00:08:49.417 "method": "nvmf_delete_target", 00:08:49.417 "req_id": 1 00:08:49.417 } 00:08:49.417 Got JSON-RPC error response 00:08:49.417 response: 00:08:49.417 { 00:08:49.417 "code": -32602, 00:08:49.417 "message": "The specified target doesn'\''t exist, cannot delete it." 00:08:49.417 }' 00:08:49.417 20:34:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:08:49.417 { 00:08:49.417 "name": "foobar", 00:08:49.417 "method": "nvmf_delete_target", 00:08:49.417 "req_id": 1 00:08:49.417 } 00:08:49.417 Got JSON-RPC error response 00:08:49.417 response: 00:08:49.417 { 00:08:49.417 "code": -32602, 00:08:49.417 "message": "The specified target doesn't exist, cannot delete it." 00:08:49.417 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:08:49.417 20:34:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:08:49.417 20:34:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:08:49.417 20:34:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:49.418 20:34:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:08:49.418 20:34:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:49.418 20:34:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:08:49.418 20:34:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:49.418 20:34:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:49.418 rmmod nvme_tcp 00:08:49.418 rmmod nvme_fabrics 00:08:49.418 rmmod nvme_keyring 00:08:49.418 20:34:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:49.418 20:34:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:08:49.418 20:34:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:08:49.418 20:34:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2569099 ']' 00:08:49.418 20:34:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2569099 00:08:49.418 20:34:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 2569099 ']' 00:08:49.418 20:34:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 2569099 00:08:49.418 20:34:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:08:49.418 20:34:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:49.418 20:34:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2569099 00:08:49.418 20:34:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:49.418 20:34:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:49.418 20:34:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2569099' 00:08:49.418 killing process with pid 2569099 00:08:49.418 20:34:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 2569099 00:08:49.418 20:34:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 2569099 00:08:49.676 20:34:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:49.676 20:34:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:49.676 20:34:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:49.676 20:34:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:49.676 20:34:23 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:49.676 20:34:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.676 20:34:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:49.676 20:34:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.581 20:34:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:51.581 00:08:51.581 real 0m11.006s 00:08:51.581 user 0m19.183s 00:08:51.581 sys 0m4.453s 00:08:51.581 20:34:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:51.581 20:34:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:51.581 ************************************ 00:08:51.581 END TEST nvmf_invalid 00:08:51.581 ************************************ 00:08:51.581 20:34:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:51.581 20:34:26 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:51.581 20:34:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:51.581 20:34:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:51.581 20:34:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:51.840 ************************************ 00:08:51.840 START TEST nvmf_abort 00:08:51.840 ************************************ 00:08:51.840 20:34:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:51.840 * Looking for test storage... 00:08:51.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:51.840 20:34:26 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:51.840 20:34:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:51.840 20:34:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:51.840 20:34:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:51.840 20:34:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:51.840 20:34:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 
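The nvmf_invalid test that finished above feeds nvmf_create_subsystem a series of deliberately bad inputs: an unknown target name (foobar), serial and model strings carrying the control byte \x1f, random strings one character past the NVMe Identify limits (21 bytes for a serial number versus the spec's 20, 41 for a model number versus 40), and controller-id windows outside SPDK's accepted 1-65519 range ([0-65519], [65520-65519], [1-0], [1-65520], [6-5]), finishing with nvmf_delete_target on a target that does not exist. The long printf/echo runs are gen_random_s assembling each random string one character at a time from ASCII codes 32 through 127 (the chars array). A condensed, behaviorally equivalent sketch of that generator, not the literal invalid.sh implementation:

  gen_random_s() {
      local length=$1 ll code char string=
      for ((ll = 0; ll < length; ll++)); do
          code=$((32 + RANDOM % 96))     # same 32..127 pool as the chars array
          printf -v char '%b' "$(printf '\\x%x' "$code")"
          string+=$char
      done
      echo "$string"   # invalid.sh also guards against a leading '-' (the [[ ... == \- ]] check above)
  }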
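Each RPC call's output is captured and matched against the expected error substring with a bash glob (the escaped patterns such as *\I\n\v\a\l\i\d\ \S\N* above), so the test fails either if the target accepts the bad input or if it reports a different error than intended. The shape of one such check, with an invocation assumed to mirror the cnode30588 case in the trace (the || true is part of this sketch, absorbing the expected non-zero exit from rpc.py before the substring match decides pass or fail):

  out=$(scripts/rpc.py nvmf_create_subsystem \
          -s "$(gen_random_s 21)" nqn.2016-06.io.spdk:cnode30588 2>&1) || true
  [[ $out == *"Invalid SN"* ]]   # a 21-byte serial must be rejected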
00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:08:51.841 20:34:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort 
-- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:57.116 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:57.116 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
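[The scan above is nvmf/common.sh matching each PCI function against its table of supported NICs: both ports on this node are Intel E810 devices (0x8086:0x159b) bound to the ice driver. The netdev lookup that follows is just the sysfs glob visible in the trace ("/sys/bus/pci/devices/$pci/net/"*); a minimal standalone sketch of the same check, using the PCI address reported above, would be:

    # List the kernel interface(s) created for one port (sketch, not the harness itself):
    ls /sys/bus/pci/devices/0000:86:00.0/net/
    # -> cvl_0_0  (the name echoed as "Found net devices under ..." just below)
]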
00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:57.116 Found net devices under 0000:86:00.0: cvl_0_0 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:57.116 Found net devices under 0000:86:00.1: cvl_0_1 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
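[At this point nvmf_tcp_init has arranged the two ports as a point-to-point rig: cvl_0_0 (target side, 10.0.0.2/24) has been moved into the cvl_0_0_ns_spdk network namespace, while cvl_0_1 (initiator side, 10.0.0.1/24) stays in the default namespace. Recreated outside the harness, with hypothetical names eth0/eth1 standing in for cvl_0_0/cvl_0_1, the setup traced above is roughly:

    ip netns add spdk_tgt_ns                         # namespace for the target port
    ip link set eth0 netns spdk_tgt_ns               # move the target port into it
    ip addr add 10.0.0.1/24 dev eth1                 # initiator address, default namespace
    ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev eth0
    ip link set eth1 up
    ip netns exec spdk_tgt_ns ip link set eth0 up
    ip netns exec spdk_tgt_ns ip link set lo up

The iptables ACCEPT rule and the two pings that follow just verify the 4420/tcp path in both directions before the target application is started.]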
00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:57.116 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:57.116 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:08:57.116 00:08:57.116 --- 10.0.0.2 ping statistics --- 00:08:57.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.116 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:08:57.116 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:57.116 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:57.116 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:08:57.116 00:08:57.116 --- 10.0.0.1 ping statistics --- 00:08:57.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.116 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:08:57.117 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:57.117 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:08:57.117 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:57.117 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:57.117 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:57.117 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:57.117 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:57.117 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:57.117 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:57.117 20:34:31 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:57.117 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:57.117 20:34:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:57.117 20:34:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:57.117 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2573258 00:08:57.117 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2573258 00:08:57.117 20:34:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 2573258 ']' 00:08:57.117 20:34:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.117 20:34:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:57.117 20:34:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.117 20:34:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:57.117 20:34:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:57.117 20:34:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:57.117 [2024-07-15 20:34:31.492078] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:08:57.117 [2024-07-15 20:34:31.492123] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.117 EAL: No free 2048 kB hugepages reported on node 1 00:08:57.117 [2024-07-15 20:34:31.550213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:57.377 [2024-07-15 20:34:31.630832] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:57.377 [2024-07-15 20:34:31.630867] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:57.377 [2024-07-15 20:34:31.630874] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:57.377 [2024-07-15 20:34:31.630881] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:57.377 [2024-07-15 20:34:31.630886] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:57.377 [2024-07-15 20:34:31.630924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:57.377 [2024-07-15 20:34:31.631010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:57.377 [2024-07-15 20:34:31.631011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.946 20:34:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:57.946 20:34:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:08:57.946 20:34:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:57.946 20:34:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:57.946 20:34:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:57.946 20:34:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:57.946 20:34:32 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:57.946 20:34:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.946 20:34:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:57.946 [2024-07-15 20:34:32.335876] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:57.946 20:34:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.946 20:34:32 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:57.946 20:34:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.946 20:34:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:57.946 Malloc0 00:08:57.946 20:34:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.946 20:34:32 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:57.946 20:34:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.946 20:34:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:57.946 Delay0 00:08:57.946 20:34:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.946 20:34:32 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
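[abort.sh is now provisioning the freshly started target over its RPC socket (/var/tmp/spdk.sock): a TCP transport, a 64 MiB malloc bdev with a 4096-byte block size, a delay bdev stacked on top of it, and, continuing just below, a subsystem that exposes Delay0 with listeners on 10.0.0.2:4420. Written directly against scripts/rpc.py, as the ns_hotplug_stress test later in this log does (rpc_cmd drives the same socket), the whole sequence amounts to roughly:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    rpc.py bdev_malloc_create 64 4096 -b Malloc0
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Here rpc.py is shorthand for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path used throughout this log.]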
00:08:57.946 20:34:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.946 20:34:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:57.946 20:34:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.946 20:34:32 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:57.946 20:34:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.946 20:34:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:57.946 20:34:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.946 20:34:32 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:57.946 20:34:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.946 20:34:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:57.946 [2024-07-15 20:34:32.412021] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:57.946 20:34:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.946 20:34:32 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:57.946 20:34:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.946 20:34:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:57.946 20:34:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.946 20:34:32 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:58.205 EAL: No free 2048 kB hugepages reported on node 1 00:08:58.205 [2024-07-15 20:34:32.509031] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:00.107 Initializing NVMe Controllers 00:09:00.107 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:00.107 controller IO queue size 128 less than required 00:09:00.107 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:00.107 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:00.107 Initialization complete. Launching workers. 
00:09:00.107 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 40317 00:09:00.107 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 40378, failed to submit 62 00:09:00.107 success 40321, unsuccess 57, failed 0 00:09:00.107 20:34:34 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:00.107 20:34:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.107 20:34:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:00.107 20:34:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.107 20:34:34 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:00.107 20:34:34 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:00.107 20:34:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:00.107 20:34:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:09:00.107 20:34:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:00.107 20:34:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:09:00.107 20:34:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:00.107 20:34:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:00.107 rmmod nvme_tcp 00:09:00.107 rmmod nvme_fabrics 00:09:00.365 rmmod nvme_keyring 00:09:00.365 20:34:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:00.365 20:34:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:09:00.365 20:34:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:09:00.365 20:34:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2573258 ']' 00:09:00.365 20:34:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2573258 00:09:00.365 20:34:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 2573258 ']' 00:09:00.365 20:34:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 2573258 00:09:00.365 20:34:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:09:00.365 20:34:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:00.365 20:34:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2573258 00:09:00.365 20:34:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:00.365 20:34:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:00.365 20:34:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2573258' 00:09:00.365 killing process with pid 2573258 00:09:00.365 20:34:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 2573258 00:09:00.366 20:34:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 2573258 00:09:00.623 20:34:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:00.623 20:34:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:00.623 20:34:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:00.623 20:34:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:00.623 20:34:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:00.623 20:34:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.623 20:34:34 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:00.623 20:34:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.552 20:34:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:02.552 00:09:02.552 real 0m10.860s 00:09:02.552 user 0m12.892s 00:09:02.552 sys 0m4.807s 00:09:02.552 20:34:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:02.552 20:34:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:02.552 ************************************ 00:09:02.552 END TEST nvmf_abort 00:09:02.552 ************************************ 00:09:02.552 20:34:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:02.552 20:34:36 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:02.552 20:34:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:02.552 20:34:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:02.552 20:34:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:02.552 ************************************ 00:09:02.552 START TEST nvmf_ns_hotplug_stress 00:09:02.552 ************************************ 00:09:02.552 20:34:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:02.811 * Looking for test storage... 00:09:02.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:02.812 20:34:37 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:02.812 20:34:37 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:02.812 20:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:08.093 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:08.093 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:08.093 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:08.093 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:08.093 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:08.093 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:08.093 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:08.093 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:08.093 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:08.093 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:09:08.093 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:08.093 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:09:08.093 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:08.093 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:08.093 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:09:08.093 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:08.093 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:08.093 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:08.093 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:08.093 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:08.093 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:08.093 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:08.093 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:08.093 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:08.094 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:08.094 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.094 20:34:42 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:08.094 Found net devices under 0000:86:00.0: cvl_0_0 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:08.094 Found net devices under 0000:86:00.1: cvl_0_1 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:08.094 20:34:42 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:08.094 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:08.094 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:09:08.094 00:09:08.094 --- 10.0.0.2 ping statistics --- 00:09:08.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.094 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:08.094 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:08.094 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:09:08.094 00:09:08.094 --- 10.0.0.1 ping statistics --- 00:09:08.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.094 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2577263 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2577263 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 2577263 ']' 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:08.094 20:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:08.094 [2024-07-15 20:34:42.477809] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:09:08.094 [2024-07-15 20:34:42.477857] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.094 EAL: No free 2048 kB hugepages reported on node 1 00:09:08.094 [2024-07-15 20:34:42.536658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:08.354 [2024-07-15 20:34:42.616783] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:08.354 [2024-07-15 20:34:42.616818] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:08.354 [2024-07-15 20:34:42.616825] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:08.354 [2024-07-15 20:34:42.616831] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:08.354 [2024-07-15 20:34:42.616836] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:08.354 [2024-07-15 20:34:42.616938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:08.354 [2024-07-15 20:34:42.617042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:08.354 [2024-07-15 20:34:42.617043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.923 20:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:08.923 20:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:09:08.923 20:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:08.923 20:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:08.923 20:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:08.923 20:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:08.923 20:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:08.923 20:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:09.181 [2024-07-15 20:34:43.486266] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:09.181 20:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:09.440 20:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:09.440 [2024-07-15 20:34:43.851562] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:09.440 20:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:09.699 20:34:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:09:09.958 Malloc0 00:09:09.958 20:34:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:09.958 Delay0 00:09:10.217 20:34:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:10.217 20:34:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:10.475 NULL1 00:09:10.475 20:34:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:10.733 20:34:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:10.733 20:34:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2577748 00:09:10.733 20:34:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:10.733 20:34:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:10.733 EAL: No free 2048 kB hugepages reported on node 1 00:09:10.733 20:34:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:10.992 20:34:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:10.992 20:34:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:11.251 true 00:09:11.251 20:34:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:11.251 20:34:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.510 20:34:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:11.510 20:34:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:11.510 20:34:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:11.769 true 00:09:11.769 20:34:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:11.769 20:34:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:12.028 20:34:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:12.287 20:34:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:12.287 20:34:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:12.287 true 00:09:12.287 20:34:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:12.287 20:34:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:12.546 20:34:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:12.805 20:34:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:12.805 20:34:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:13.064 true 00:09:13.064 20:34:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:13.064 20:34:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:13.064 20:34:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:13.323 20:34:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:13.323 20:34:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:13.582 true 00:09:13.582 20:34:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:13.582 20:34:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:13.841 20:34:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:13.841 20:34:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:13.841 20:34:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:14.100 true 00:09:14.100 20:34:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:14.100 20:34:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.359 20:34:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:14.618 20:34:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 
00:09:14.618 20:34:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:14.618 true 00:09:14.618 20:34:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:14.618 20:34:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.877 20:34:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:15.136 20:34:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:15.136 20:34:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:15.395 true 00:09:15.395 20:34:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:15.395 20:34:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.395 20:34:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:15.653 20:34:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:09:15.653 20:34:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:15.913 true 00:09:15.913 20:34:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:15.913 20:34:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.171 20:34:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:16.171 20:34:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:16.171 20:34:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:16.430 true 00:09:16.430 20:34:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:16.430 20:34:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.689 20:34:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:16.949 20:34:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:16.949 20:34:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1011 00:09:16.949 true 00:09:16.949 20:34:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:16.949 20:34:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.208 20:34:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:17.467 20:34:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:17.468 20:34:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:17.726 true 00:09:17.727 20:34:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:17.727 20:34:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.727 20:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:17.985 20:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:17.985 20:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:18.243 true 00:09:18.243 20:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:18.243 20:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:18.502 20:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:18.502 20:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:18.502 20:34:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:18.767 true 00:09:18.767 20:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:18.767 20:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.103 20:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:19.103 20:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:19.103 20:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:19.361 true 00:09:19.361 20:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 
00:09:19.361 20:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.620 20:34:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:19.878 20:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:19.878 20:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:19.878 true 00:09:19.878 20:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:19.878 20:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.137 20:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:20.397 20:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:20.397 20:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:20.397 true 00:09:20.656 20:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:20.656 20:34:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.656 20:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:20.913 20:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:20.913 20:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:21.170 true 00:09:21.170 20:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:21.170 20:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.427 20:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:21.427 20:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:21.427 20:34:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:21.685 true 00:09:21.685 20:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:21.685 20:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.942 20:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:22.200 20:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:22.200 20:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:22.200 true 00:09:22.200 20:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:22.200 20:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.457 20:34:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:22.715 20:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:22.715 20:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:22.715 true 00:09:22.715 20:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:22.715 20:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.973 20:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:23.244 20:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:23.244 20:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:23.501 true 00:09:23.501 20:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:23.501 20:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:23.759 20:34:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:23.759 20:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:23.759 20:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:24.017 true 00:09:24.017 20:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:24.017 20:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.275 20:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:24.533 20:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:24.533 20:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:24.533 true 00:09:24.533 20:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:24.533 20:34:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.791 20:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:25.050 20:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:25.050 20:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:25.336 true 00:09:25.336 20:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:25.336 20:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:25.336 20:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:25.595 20:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:25.595 20:34:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:25.853 true 00:09:25.853 20:35:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:25.853 20:35:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:26.112 20:35:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:26.112 20:35:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:26.112 20:35:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:26.370 true 00:09:26.370 20:35:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:26.370 20:35:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:26.629 20:35:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:26.888 20:35:01 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:09:26.888 20:35:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:26.888 true 00:09:26.888 20:35:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:26.888 20:35:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:27.147 20:35:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:27.405 20:35:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:09:27.405 20:35:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:09:27.664 true 00:09:27.664 20:35:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:27.664 20:35:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:27.923 20:35:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:27.923 20:35:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:09:27.923 20:35:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:09:28.182 true 00:09:28.182 20:35:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:28.182 20:35:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.441 20:35:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:28.700 20:35:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:09:28.700 20:35:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:09:28.700 true 00:09:28.700 20:35:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:28.700 20:35:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:29.009 20:35:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:29.267 20:35:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:09:29.267 20:35:03 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:09:29.267 true 00:09:29.527 20:35:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:29.527 20:35:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:29.527 20:35:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:29.787 20:35:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:09:29.787 20:35:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:09:30.045 true 00:09:30.045 20:35:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:30.045 20:35:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.304 20:35:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:30.304 20:35:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:09:30.304 20:35:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:09:30.562 true 00:09:30.562 20:35:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:30.562 20:35:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.821 20:35:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:31.081 20:35:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:09:31.081 20:35:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:09:31.081 true 00:09:31.081 20:35:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:31.081 20:35:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:31.340 20:35:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:31.600 20:35:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:09:31.600 20:35:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:09:31.859 true 00:09:31.859 
20:35:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:31.859 20:35:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:31.859 20:35:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:32.118 20:35:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:09:32.118 20:35:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:09:32.378 true 00:09:32.378 20:35:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:32.378 20:35:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:32.636 20:35:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:32.931 20:35:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:09:32.931 20:35:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:09:32.931 true 00:09:32.931 20:35:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:32.931 20:35:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:33.189 20:35:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:33.448 20:35:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:09:33.448 20:35:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:09:33.706 true 00:09:33.706 20:35:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:33.706 20:35:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:33.706 20:35:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:33.965 20:35:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:09:33.965 20:35:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:09:34.224 true 00:09:34.224 20:35:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:34.224 20:35:08 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:34.483 20:35:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:34.483 20:35:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:09:34.483 20:35:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:09:34.741 true 00:09:34.741 20:35:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:34.741 20:35:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.001 20:35:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:35.260 20:35:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:09:35.260 20:35:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:09:35.519 true 00:09:35.519 20:35:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:35.519 20:35:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.519 20:35:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:35.778 20:35:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:09:35.778 20:35:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:09:36.037 true 00:09:36.037 20:35:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:36.037 20:35:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.296 20:35:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:36.555 20:35:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:09:36.555 20:35:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:09:36.555 true 00:09:36.555 20:35:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:36.555 20:35:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:09:36.813 20:35:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.072 20:35:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:09:37.072 20:35:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:09:37.072 true 00:09:37.330 20:35:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:37.330 20:35:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.330 20:35:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.589 20:35:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:09:37.589 20:35:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:09:37.848 true 00:09:37.848 20:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:37.848 20:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:38.107 20:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:38.107 20:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:09:38.107 20:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:09:38.366 true 00:09:38.366 20:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:38.366 20:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:38.624 20:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:38.883 20:35:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:09:38.884 20:35:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:09:38.884 true 00:09:38.884 20:35:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:38.884 20:35:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.142 20:35:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:39.400 20:35:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:09:39.400 20:35:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:09:39.658 true 00:09:39.658 20:35:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:39.658 20:35:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.916 20:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:39.916 20:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:09:39.916 20:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:09:40.173 true 00:09:40.173 20:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:40.173 20:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.430 20:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:40.688 20:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:09:40.688 20:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:09:40.975 true 00:09:40.975 20:35:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:40.975 20:35:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.975 Initializing NVMe Controllers 00:09:40.975 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:40.975 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1 00:09:40.975 Controller IO queue size 128, less than required. 00:09:40.975 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:40.975 WARNING: Some requested NVMe devices were skipped 00:09:40.975 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:09:40.975 Initialization complete. Launching workers. 
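Two details in the controller banner above are easy to miss. First, perf connected at an instant when NSID 1 happened to be detached, hence "Skipping inactive NS 1": the workload runs against NSID 2 alone. Second, the requested queue depth exceeds the controller's 128-entry I/O queue ("Controller IO queue size 128, less than required"), so surplus requests wait in the host-side NVMe driver rather than on the wire. The log never prints the perf command line; the invocation below is an assumed illustration using SPDK's example initiator (build/examples/perf), only to show the two knobs that warning points at, -q (queue depth) and -o (I/O size in bytes):

    # Assumed invocation, not taken from this log; flag values are illustrative.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/perf \
        -q 64 -o 512 -w randread -t 10 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'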
00:09:40.975 ========================================================
00:09:40.975                                                                              Latency(us)
00:09:40.975 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:09:40.975 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   26898.30      13.13    4758.78    2121.64   44558.22
00:09:40.975 ========================================================
00:09:40.975 Total                                                                  :   26898.30      13.13    4758.78    2121.64   44558.22
00:09:40.975
00:09:40.975 20:35:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.233 20:35:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:09:41.233 20:35:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:09:41.490 true 00:09:41.490 20:35:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2577748 00:09:41.490 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2577748) - No such process 00:09:41.490 20:35:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2577748 00:09:41.490 20:35:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:41.490 20:35:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:41.746 20:35:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:09:41.746 20:35:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:09:41.746 20:35:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:09:41.746 20:35:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:41.746 20:35:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:09:42.004 null0 00:09:42.004 20:35:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:42.004 20:35:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:42.004 20:35:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:09:42.004 null1 00:09:42.263 20:35:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:42.263 20:35:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:42.263 20:35:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:09:42.263 null2 00:09:42.263 20:35:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:42.263 20:35:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:42.263 20:35:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:09:42.521 null3 00:09:42.521 20:35:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:42.521 20:35:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:42.521 20:35:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:42.780 null4 00:09:42.780 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:42.780 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:42.780 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:09:42.780 null5 00:09:42.780 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:42.780 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:42.780 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:09:43.039 null6 00:09:43.039 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:43.039 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:43.039 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:43.299 null7 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
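Circling back to the latency summary above: the table gives 26898.30 IOPS against 13.13 MiB/s but never states the request size, and the columns pin down both the I/O size and the in-flight depth with a one-liner (values copied from the table):

    awk 'BEGIN {
        # bytes per request = MiB/s * 2^20 / IOPS
        printf "%.1f bytes/request\n",     13.13 * 1048576 / 26898.30
        # Little's law: IOPS * avg latency (s) = requests in flight
        printf "%.1f requests in flight\n", 26898.30 * 4758.78 / 1e6
    }'

This prints 511.9 bytes/request, i.e. the run was issuing ~512-byte I/O, and 128.0 requests in flight, which matches the controller's 128-entry queue noted in the warning earlier.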
00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
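From sh@58 onward the test enters its concurrent phase, and the interleaved records above and below are eight background workers racing one another: eight null bdevs (null0-null7, 100 MB each with a 4096-byte block size) are created, each worker hot-adds and hot-removes its own namespace ten times (sh@14-sh@18), and sh@66 waits on all eight PIDs. A sketch reconstructed from those markers, again not the verbatim script:

    # Reconstruction of the 8-thread add/remove phase (sh@58-sh@66); a sketch.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    nthreads=8                                             # sh@58
    pids=()

    add_remove() {                                         # sh@14-sh@18
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # sh@17
            $rpc nvmf_subsystem_remove_ns "$nqn" "$nsid"           # sh@18
        done
    }

    for ((i = 0; i < nthreads; i++)); do                   # sh@59-sh@60
        $rpc bdev_null_create "null$i" 100 4096            # 100 MB, 4096 B blocks
    done
    for ((i = 0; i < nthreads; i++)); do                   # sh@62-sh@64
        add_remove "$((i + 1))" "null$i" &                 # NSID i+1 backed by null$i
        pids+=($!)
    done
    wait "${pids[@]}"                                      # sh@66: the 8 PIDs in the log

Because each worker runs in its own subshell, the sh@17/sh@18 xtrace lines from different namespaces interleave freely, which is exactly the racy hot-plug traffic this stress test is after.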
00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2583357 2583359 2583362 2583364 2583368 2583371 2583372 2583375 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:43.299 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:43.559 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:43.559 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:43.559 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.559 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:43.559 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:43.559 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:43.559 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:43.559 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:43.559 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:43.559 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:43.559 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:43.559 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:43.559 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:43.559 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:43.559 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:43.559 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:43.559 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:43.559 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:43.559 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:43.559 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:43.559 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:43.559 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:43.559 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:43.559 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:43.559 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:43.559 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:43.559 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:43.559 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:43.559 20:35:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:43.559 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:43.559 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:43.559 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:43.819 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:43.819 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.819 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:43.819 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:43.819 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:43.819 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:43.819 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:43.819 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:44.078 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:44.078 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:44.078 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:44.078 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:44.078 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:44.078 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:44.078 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:44.078 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:44.078 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:44.078 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:44.078 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:44.078 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:44.078 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:44.078 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:44.078 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:44.078 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:44.078 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 
10 )) 00:09:44.078 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:44.078 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:44.078 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:44.078 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:44.078 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:44.078 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:44.078 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:44.078 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:44.078 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:44.078 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:44.078 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:44.078 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:44.078 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.338 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:44.338 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:44.338 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:44.338 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:44.338 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:44.338 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:44.338 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:44.338 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:44.338 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:44.338 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:44.338 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:44.338 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:44.338 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:44.338 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:44.338 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:44.338 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:44.338 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:44.338 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:44.338 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:44.338 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:44.338 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:44.338 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:44.338 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:44.338 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:44.338 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:44.338 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:44.597 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:44.597 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:44.597 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.597 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:44.597 
20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:44.597 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:44.597 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:44.597 20:35:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:44.856 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:44.856 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:44.856 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:44.856 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:44.856 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:44.856 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:44.856 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:44.856 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:44.856 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:44.856 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:44.856 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:44.856 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:44.856 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:44.856 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:44.856 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:44.856 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:44.856 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:44.856 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:44.856 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:44.856 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:44.856 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:44.856 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:44.856 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:44.856 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:44.857 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:44.857 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:44.857 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.857 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:44.857 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:44.857 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:44.857 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:45.117 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:45.117 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.117 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.117 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:45.117 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.117 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.117 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:45.117 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.117 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.117 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:45.117 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.117 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.117 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:45.117 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.117 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.117 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:45.117 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.117 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.117 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:45.117 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.117 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.117 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.117 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.117 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:45.117 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:45.376 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:45.376 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.376 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:45.376 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:45.376 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:45.376 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:45.376 20:35:19 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:45.377 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:45.636 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.636 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.636 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:45.636 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.636 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.636 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:45.636 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.636 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.636 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:45.636 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.636 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.636 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:45.636 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.636 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.636 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:45.636 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.636 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.636 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:45.636 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.636 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.636 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:45.636 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.636 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.636 20:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:45.636 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:45.636 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.636 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:45.636 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:45.636 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:45.636 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:45.636 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:45.636 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:45.896 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.896 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.896 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:45.896 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.896 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.896 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:45.896 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.896 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.896 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:45.896 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.896 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.896 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:45.896 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.896 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.896 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:45.896 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.896 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.896 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:45.896 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.896 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.896 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:45.896 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.896 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.896 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:46.154 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:46.154 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.155 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:46.155 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:46.155 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:46.155 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:46.155 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:46.155 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:46.155 20:35:20 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:46.155 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.155 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:46.444 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:46.444 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.444 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:46.444 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:46.444 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.445 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:46.445 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:46.445 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.445 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:46.445 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:46.445 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.445 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:46.445 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:46.445 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.445 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:46.445 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:46.445 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.445 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:46.445 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:46.445 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.445 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:46.445 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.445 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:46.445 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:46.445 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:46.445 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:46.445 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:46.445 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:46.445 20:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:46.734 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:46.734 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.734 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:46.734 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:46.734 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.734 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:46.734 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:46.734 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.734 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:46.734 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:46.734 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.735 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:46.735 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:46.735 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.735 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:46.735 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:46.735 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.735 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:46.735 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:46.735 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.735 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:46.735 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:46.735 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.735 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:46.735 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:46.735 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.994 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:46.994 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:46.994 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:46.994 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:46.994 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:46.994 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:46.994 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:46.994 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.994 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:46.994 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.994 20:35:21 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:46.994 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.994 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:46.994 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.994 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:46.994 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.994 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:46.994 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.994 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:46.994 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.994 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:46.994 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.994 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:46.994 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:46.994 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:46.994 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:09:46.994 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:46.994 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:09:46.994 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:46.994 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:46.994 rmmod nvme_tcp 00:09:46.994 rmmod nvme_fabrics 00:09:47.253 rmmod nvme_keyring 00:09:47.253 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:47.253 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:09:47.253 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:09:47.253 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2577263 ']' 00:09:47.253 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2577263 00:09:47.253 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 2577263 ']' 00:09:47.253 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 2577263 00:09:47.253 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:09:47.253 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:47.253 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2577263 00:09:47.253 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:47.253 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:47.253 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2577263' 00:09:47.253 killing 
process with pid 2577263
00:09:47.253 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 2577263
00:09:47.253 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 2577263
00:09:47.512 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:09:47.512 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:09:47.512 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:09:47.512 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:09:47.512 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:09:47.512 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:47.512 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:09:47.512 20:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:49.420 20:35:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:09:49.420
00:09:49.420 real 0m46.818s
00:09:49.420 user 3m19.717s
00:09:49.420 sys 0m16.549s
00:09:49.420 20:35:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:49.420 20:35:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:09:49.420 ************************************
00:09:49.420 END TEST nvmf_ns_hotplug_stress
00:09:49.420 ************************************
00:09:49.420 20:35:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:09:49.420 20:35:23 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:09:49.420 20:35:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:09:49.420 20:35:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:49.420 20:35:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:09:49.420 ************************************
00:09:49.420 START TEST nvmf_connect_stress
00:09:49.420 ************************************
00:09:49.420 20:35:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:09:49.679 * Looking for test storage...
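Before the connect_stress setup continues below, a note on the trace that just ended: the long namespace churn above is produced by only three script lines, target/ns_hotplug_stress.sh@16-18, run once per namespace as parallel background jobs. A minimal sketch of that pattern, reconstructed from the xtrace records alone (the add_remove helper name and the exact backgrounding are assumptions, not copied from the SPDK source):

    # Sketch inferred from the xtrace; the real script may differ in detail.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; ++i)); do                                                     # sh@16
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # sh@17
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # sh@18
        done
    }
    # Eight concurrent instances (nsid 1..8 backed by bdevs null0..null7) would explain
    # why the add/remove records above interleave in a different order on each pass.
    for n in {1..8}; do
        add_remove "$n" "null$((n - 1))" &
    done
    wait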
00:09:49.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:49.679 20:35:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:49.679 20:35:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:09:49.679 20:35:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:49.679 20:35:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:49.679 20:35:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:49.679 20:35:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:49.679 20:35:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:49.679 20:35:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:49.679 20:35:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:49.679 20:35:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:49.679 20:35:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:49.679 20:35:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:49.679 20:35:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:49.679 20:35:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:49.679 20:35:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:49.679 20:35:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:49.679 20:35:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:49.679 20:35:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:49.680 20:35:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:49.680 20:35:23 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:49.680 20:35:23 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:49.680 20:35:23 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:49.680 20:35:23 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.680 20:35:23 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.680 20:35:23 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.680 20:35:23 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:09:49.680 20:35:23 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.680 20:35:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:09:49.680 20:35:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:49.680 20:35:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:49.680 20:35:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:49.680 20:35:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:49.680 20:35:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:49.680 20:35:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:49.680 20:35:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:49.680 20:35:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:49.680 20:35:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:09:49.680 20:35:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:49.680 20:35:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:49.680 20:35:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:49.680 20:35:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:49.680 20:35:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:49.680 20:35:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.680 20:35:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:09:49.680 20:35:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.680 20:35:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:49.680 20:35:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:49.680 20:35:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:49.680 20:35:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:54.959 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:54.959 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:54.959 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:54.959 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:54.959 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:54.959 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:54.959 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:54.959 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:54.959 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:54.959 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:09:54.959 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:54.959 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:09:54.959 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:54.959 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:54.959 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:09:54.959 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:54.959 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:54.959 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:54.959 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:54.959 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:54.959 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:54.959 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:54.959 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:54.959 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:54.959 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:54.959 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:54.959 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:54.959 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:54.959 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:09:54.959 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:54.959 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:54.959 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:54.959 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:54.959 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:54.959 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:54.959 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:54.960 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:54.960 Found net devices under 0000:86:00.0: cvl_0_0 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:54.960 20:35:28 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:54.960 Found net devices under 0000:86:00.1: cvl_0_1 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:54.960 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:54.960 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:09:54.960 00:09:54.960 --- 10.0.0.2 ping statistics --- 00:09:54.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.960 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:54.960 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:54.960 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:09:54.960 00:09:54.960 --- 10.0.0.1 ping statistics --- 00:09:54.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.960 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2587559 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2587559 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 2587559 ']' 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:54.960 20:35:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:54.960 [2024-07-15 20:35:28.990533] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
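
With both ports discovered, nvmf_tcp_init splits them across a network namespace so target and initiator talk over the physical link: cvl_0_0 moves into a fresh cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and the two-way pings confirm the path before the target comes up. Condensed from the trace above; interface names and addresses are specific to this runner:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target NIC leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns
    # nvmfappstart then launches the target inside the namespace:
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
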
00:09:54.960 [2024-07-15 20:35:28.990574] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:54.960 EAL: No free 2048 kB hugepages reported on node 1 00:09:54.960 [2024-07-15 20:35:29.046095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:54.960 [2024-07-15 20:35:29.125445] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:54.960 [2024-07-15 20:35:29.125478] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:54.960 [2024-07-15 20:35:29.125484] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:54.960 [2024-07-15 20:35:29.125490] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:54.960 [2024-07-15 20:35:29.125495] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:54.960 [2024-07-15 20:35:29.125536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:54.960 [2024-07-15 20:35:29.125621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:54.960 [2024-07-15 20:35:29.125623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:55.528 [2024-07-15 20:35:29.849981] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:55.528 [2024-07-15 20:35:29.879333] tcp.c: 
981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:55.528 NULL1 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2587740 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:55.528 EAL: No free 2048 kB hugepages reported on node 1 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:55.528 20:35:29 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2587740 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:55.528 20:35:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:56.095 20:35:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.095 20:35:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2587740 00:09:56.095 20:35:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:56.095 20:35:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.095 20:35:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:56.354 20:35:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.354 20:35:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2587740 00:09:56.354 20:35:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:56.354 20:35:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.354 20:35:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:56.612 20:35:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.612 20:35:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # 
kill -0 2587740 00:09:56.612 20:35:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:56.612 20:35:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.612 20:35:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:56.871 20:35:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.871 20:35:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2587740 00:09:56.871 20:35:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:56.871 20:35:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.871 20:35:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:57.130 20:35:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.130 20:35:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2587740 00:09:57.130 20:35:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:57.130 20:35:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.130 20:35:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:57.699 20:35:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.699 20:35:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2587740 00:09:57.699 20:35:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:57.699 20:35:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.699 20:35:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:57.958 20:35:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.958 20:35:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2587740 00:09:57.958 20:35:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:57.958 20:35:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.958 20:35:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:58.217 20:35:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.217 20:35:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2587740 00:09:58.217 20:35:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:58.217 20:35:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.217 20:35:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:58.476 20:35:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.476 20:35:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2587740 00:09:58.476 20:35:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:58.476 20:35:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.476 20:35:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:59.042 20:35:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.042 20:35:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2587740 00:09:59.042 20:35:33 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:59.042 20:35:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.042 20:35:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:59.301 20:35:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.301 20:35:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2587740 00:09:59.301 20:35:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:59.301 20:35:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.301 20:35:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:59.559 20:35:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.559 20:35:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2587740 00:09:59.559 20:35:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:59.559 20:35:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.559 20:35:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:59.817 20:35:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.817 20:35:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2587740 00:09:59.817 20:35:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:59.817 20:35:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.817 20:35:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:00.076 20:35:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.076 20:35:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2587740 00:10:00.076 20:35:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:00.076 20:35:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.076 20:35:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:00.653 20:35:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.653 20:35:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2587740 00:10:00.653 20:35:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:00.653 20:35:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.653 20:35:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:00.912 20:35:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.912 20:35:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2587740 00:10:00.912 20:35:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:00.912 20:35:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.912 20:35:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:01.170 20:35:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.170 20:35:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2587740 00:10:01.170 20:35:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:10:01.170 20:35:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.170 20:35:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:01.429 20:35:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.429 20:35:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2587740 00:10:01.429 20:35:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:01.429 20:35:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.429 20:35:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:01.687 20:35:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.687 20:35:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2587740 00:10:01.687 20:35:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:01.687 20:35:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.687 20:35:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:02.254 20:35:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.254 20:35:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2587740 00:10:02.254 20:35:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:02.254 20:35:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.254 20:35:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:02.512 20:35:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.512 20:35:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2587740 00:10:02.512 20:35:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:02.512 20:35:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.512 20:35:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:02.770 20:35:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.770 20:35:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2587740 00:10:02.770 20:35:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:02.770 20:35:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.770 20:35:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:03.029 20:35:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.029 20:35:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2587740 00:10:03.029 20:35:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:03.029 20:35:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.029 20:35:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:03.598 20:35:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.598 20:35:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2587740 00:10:03.598 20:35:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:03.598 20:35:37 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.598 20:35:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:03.857 20:35:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.857 20:35:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2587740 00:10:03.857 20:35:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:03.857 20:35:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.857 20:35:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:04.116 20:35:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:04.116 20:35:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2587740 00:10:04.116 20:35:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:04.116 20:35:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:04.116 20:35:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:04.407 20:35:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:04.407 20:35:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2587740 00:10:04.407 20:35:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:04.407 20:35:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:04.407 20:35:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:04.695 20:35:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:04.695 20:35:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2587740 00:10:04.695 20:35:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:04.695 20:35:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:04.695 20:35:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:04.955 20:35:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:04.955 20:35:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2587740 00:10:04.955 20:35:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:04.955 20:35:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:04.955 20:35:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:05.524 20:35:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.524 20:35:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2587740 00:10:05.524 20:35:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:05.524 20:35:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.524 20:35:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:05.784 20:35:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.784 20:35:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2587740 00:10:05.784 20:35:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:05.784 20:35:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 
-- # xtrace_disable 00:10:05.784 20:35:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:05.784 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:06.044 20:35:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.044 20:35:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2587740 00:10:06.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2587740) - No such process 00:10:06.044 20:35:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2587740 00:10:06.044 20:35:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:06.044 20:35:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:06.044 20:35:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:06.044 20:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:06.044 20:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:10:06.044 20:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:06.044 20:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:10:06.044 20:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:06.044 20:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:06.044 rmmod nvme_tcp 00:10:06.044 rmmod nvme_fabrics 00:10:06.044 rmmod nvme_keyring 00:10:06.044 20:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:06.044 20:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:10:06.044 20:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:10:06.044 20:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2587559 ']' 00:10:06.044 20:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2587559 00:10:06.044 20:35:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 2587559 ']' 00:10:06.044 20:35:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 2587559 00:10:06.044 20:35:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:10:06.044 20:35:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:06.044 20:35:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2587559 00:10:06.044 20:35:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:06.044 20:35:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:06.044 20:35:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2587559' 00:10:06.044 killing process with pid 2587559 00:10:06.044 20:35:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 2587559 00:10:06.044 20:35:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 2587559 00:10:06.303 20:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:06.303 20:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:06.303 20:35:40 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:06.303 20:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:06.303 20:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:06.303 20:35:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.303 20:35:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:06.303 20:35:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.841 20:35:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:08.841 00:10:08.841 real 0m18.855s 00:10:08.841 user 0m41.994s 00:10:08.841 sys 0m7.745s 00:10:08.841 20:35:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:08.841 20:35:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:08.841 ************************************ 00:10:08.841 END TEST nvmf_connect_stress 00:10:08.841 ************************************ 00:10:08.841 20:35:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:08.841 20:35:42 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:08.841 20:35:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:08.841 20:35:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:08.841 20:35:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:08.841 ************************************ 00:10:08.841 START TEST nvmf_fused_ordering 00:10:08.841 ************************************ 00:10:08.841 20:35:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:08.841 * Looking for test storage... 
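
That closes out the connect_stress pass, and the pattern is worth spelling out: the harness configured the target over /var/tmp/spdk.sock via rpc_cmd, launched the connect_stress initiator against it for 10 seconds, and polled its PID until it exited, so the "kill: (2587740) - No such process" above is the success signal, not an error. A sketch of the flow with arguments taken from the trace; the while-loop body is reconstructed from the @34/@35 trace lines and the rpc.txt handling, so treat the redirection as an assumption:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512             # ~1 GB null bdev, 512 B blocks
    test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
    PERF_PID=$!
    while kill -0 "$PERF_PID" 2> /dev/null; do
        rpc_cmd < "$rpcs"                               # replay the canned RPCs from rpc.txt
    done
    wait "$PERF_PID"
    # nvmftestfini then tears everything down, as traced above:
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    killprocess "$nvmfpid"                              # stops nvmf_tgt (pid 2587559 here)
    remove_spdk_ns                                      # drops cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1

The harness then moves straight on to nvmf_fused_ordering, which locates its test sources before repeating the same bring-up.
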
00:10:08.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:08.841 20:35:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:08.841 20:35:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:10:08.841 20:35:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:08.841 20:35:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:08.841 20:35:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:08.841 20:35:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:08.841 20:35:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:08.841 20:35:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:08.841 20:35:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:08.841 20:35:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:08.841 20:35:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:08.841 20:35:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:08.841 20:35:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:08.841 20:35:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:08.841 20:35:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:08.842 20:35:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:08.842 20:35:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:08.842 20:35:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:08.842 20:35:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:08.842 20:35:42 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:08.842 20:35:42 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.842 20:35:42 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.842 20:35:42 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.842 20:35:42 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.842 20:35:42 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.842 20:35:42 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:10:08.842 20:35:42 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.842 20:35:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:10:08.842 20:35:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:08.842 20:35:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:08.842 20:35:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:08.842 20:35:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:08.842 20:35:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:08.842 20:35:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:08.842 20:35:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:08.842 20:35:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:08.842 20:35:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:10:08.842 20:35:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:08.842 20:35:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:08.842 20:35:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:08.842 20:35:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:08.842 20:35:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:08.842 20:35:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.842 20:35:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:08.842 20:35:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.842 20:35:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:08.842 20:35:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:08.842 20:35:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:10:08.842 20:35:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:14.119 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:14.119 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:14.119 Found net devices under 0000:86:00.0: cvl_0_0 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.119 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:14.119 20:35:48 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:14.120 Found net devices under 0000:86:00.1: cvl_0_1 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:14.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:14.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:10:14.120 00:10:14.120 --- 10.0.0.2 ping statistics --- 00:10:14.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.120 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:14.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:14.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:10:14.120 00:10:14.120 --- 10.0.0.1 ping statistics --- 00:10:14.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.120 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2592969 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2592969 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 2592969 ']' 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:14.120 20:35:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:14.120 [2024-07-15 20:35:48.382098] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
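
From here the fused_ordering target comes up exactly as the connect_stress one did: same PCI scan, same cvl_0_0_ns_spdk namespace, same 10.0.0.1/10.0.0.2 pair. The notable difference is the core mask, -m 0x2 instead of -m 0xE, so a single reactor rather than three, which fits a test about command sequencing rather than load. The subsystem configuration traced below also adds one step the connect_stress trace did not show, attaching the null bdev as a namespace explicitly. Condensed sketch of the steps that follow:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512
    rpc_cmd bdev_wait_for_examine                       # let bdev layer finish probing
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
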
00:10:14.120 [2024-07-15 20:35:48.382145] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:14.120 EAL: No free 2048 kB hugepages reported on node 1 00:10:14.120 [2024-07-15 20:35:48.440867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.120 [2024-07-15 20:35:48.519671] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:14.120 [2024-07-15 20:35:48.519702] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:14.120 [2024-07-15 20:35:48.519710] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:14.120 [2024-07-15 20:35:48.519716] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:14.120 [2024-07-15 20:35:48.519720] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:14.120 [2024-07-15 20:35:48.519736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.059 20:35:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:15.059 20:35:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:10:15.059 20:35:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:15.059 20:35:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:15.059 20:35:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:15.059 20:35:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:15.059 20:35:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:15.059 20:35:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.059 20:35:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:15.059 [2024-07-15 20:35:49.231152] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:15.059 20:35:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.059 20:35:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:15.059 20:35:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.059 20:35:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:15.059 20:35:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.059 20:35:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:15.059 20:35:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.059 20:35:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:15.059 [2024-07-15 20:35:49.251283] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:15.059 20:35:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.059 20:35:49 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:15.059 20:35:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.059 20:35:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:15.059 NULL1 00:10:15.059 20:35:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.059 20:35:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:10:15.059 20:35:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.059 20:35:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:15.059 20:35:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.059 20:35:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:15.059 20:35:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.059 20:35:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:15.059 20:35:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.059 20:35:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:15.059 [2024-07-15 20:35:49.306622] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:10:15.059 [2024-07-15 20:35:49.306666] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2593174 ] 00:10:15.059 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.318 Attached to nqn.2016-06.io.spdk:cnode1 00:10:15.318 Namespace ID: 1 size: 1GB 00:10:15.318 fused_ordering(0) 00:10:15.318 fused_ordering(1) 00:10:15.318 fused_ordering(2) 00:10:15.318 fused_ordering(3) 00:10:15.318 fused_ordering(4) 00:10:15.318 fused_ordering(5) 00:10:15.318 fused_ordering(6) 00:10:15.318 fused_ordering(7) 00:10:15.318 fused_ordering(8) 00:10:15.318 fused_ordering(9) 00:10:15.318 fused_ordering(10) 00:10:15.318 fused_ordering(11) 00:10:15.318 fused_ordering(12) 00:10:15.318 fused_ordering(13) 00:10:15.318 fused_ordering(14) 00:10:15.318 fused_ordering(15) 00:10:15.318 fused_ordering(16) 00:10:15.318 fused_ordering(17) 00:10:15.318 fused_ordering(18) 00:10:15.318 fused_ordering(19) 00:10:15.318 fused_ordering(20) 00:10:15.318 fused_ordering(21) 00:10:15.318 fused_ordering(22) 00:10:15.318 fused_ordering(23) 00:10:15.318 fused_ordering(24) 00:10:15.318 fused_ordering(25) 00:10:15.318 fused_ordering(26) 00:10:15.318 fused_ordering(27) 00:10:15.318 fused_ordering(28) 00:10:15.318 fused_ordering(29) 00:10:15.318 fused_ordering(30) 00:10:15.318 fused_ordering(31) 00:10:15.318 fused_ordering(32) 00:10:15.318 fused_ordering(33) 00:10:15.318 fused_ordering(34) 00:10:15.318 fused_ordering(35) 00:10:15.318 fused_ordering(36) 00:10:15.318 fused_ordering(37) 00:10:15.318 fused_ordering(38) 00:10:15.318 fused_ordering(39) 00:10:15.318 fused_ordering(40) 00:10:15.318 fused_ordering(41) 00:10:15.318 fused_ordering(42) 00:10:15.318 fused_ordering(43) 00:10:15.318 
fused_ordering(44) ... fused_ordering(1012) [969 further fused_ordering completion lines elided for readability: the counter runs sequentially from 44 through 1012 with no gaps, and the interleaved timestamps advance from 00:10:15.318 to 00:10:16.977, stepping roughly every 205 completions]
00:10:16.977 fused_ordering(1013) 00:10:16.977 fused_ordering(1014) 00:10:16.977 fused_ordering(1015) 00:10:16.977 fused_ordering(1016) 00:10:16.977 fused_ordering(1017) 00:10:16.977 fused_ordering(1018) 00:10:16.977 fused_ordering(1019) 00:10:16.977 fused_ordering(1020) 00:10:16.977 fused_ordering(1021) 00:10:16.977 fused_ordering(1022) 00:10:16.977 fused_ordering(1023) 00:10:16.977 20:35:51 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:10:16.977 20:35:51 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:10:16.977 20:35:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:16.977 20:35:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:10:16.977 20:35:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:16.977 20:35:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:10:16.977 20:35:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:16.977 20:35:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:16.977 rmmod nvme_tcp 00:10:16.977 rmmod nvme_fabrics 00:10:16.977 rmmod nvme_keyring 00:10:16.977 20:35:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:16.977 20:35:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:10:16.977 20:35:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:10:16.977 20:35:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2592969 ']' 00:10:16.977 20:35:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2592969 00:10:16.977 20:35:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 2592969 ']' 00:10:16.977 20:35:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 2592969 00:10:16.977 20:35:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:10:16.977 20:35:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:16.977 20:35:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2592969 00:10:16.977 20:35:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:16.977 20:35:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:16.977 20:35:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2592969' 00:10:16.977 killing process with pid 2592969 00:10:16.977 20:35:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 2592969 00:10:16.977 20:35:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 2592969 00:10:17.236 20:35:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:17.236 20:35:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:17.236 20:35:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:17.236 20:35:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:17.236 20:35:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:17.236 20:35:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.236 20:35:51 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:17.236 20:35:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.774 20:35:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:19.774 00:10:19.774 real 0m10.884s 00:10:19.774 user 0m5.742s 00:10:19.774 sys 0m5.637s 00:10:19.774 20:35:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:19.774 20:35:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:19.774 ************************************ 00:10:19.774 END TEST nvmf_fused_ordering 00:10:19.774 ************************************ 00:10:19.774 20:35:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:19.774 20:35:53 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:19.774 20:35:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:19.774 20:35:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:19.774 20:35:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:19.774 ************************************ 00:10:19.774 START TEST nvmf_delete_subsystem 00:10:19.774 ************************************ 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:19.774 * Looking for test storage... 00:10:19.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:19.774 20:35:53 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:19.774 20:35:53 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:19.774 20:35:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:25.046 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:25.046 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:10:25.046 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:25.046 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:25.046 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:25.047 20:35:58 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:25.047 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:25.047 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:25.047 20:35:58 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:25.047 Found net devices under 0000:86:00.0: cvl_0_0 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:25.047 Found net devices under 0000:86:00.1: cvl_0_1 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:25.047 20:35:58 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:25.047 20:35:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:25.047 20:35:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:25.047 20:35:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:25.047 20:35:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:25.047 20:35:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:25.047 20:35:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:25.047 20:35:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:25.047 20:35:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:25.047 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:25.047 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:10:25.047 00:10:25.047 --- 10.0.0.2 ping statistics --- 00:10:25.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.047 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:10:25.047 20:35:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:25.047 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:25.047 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:10:25.047 00:10:25.047 --- 10.0.0.1 ping statistics --- 00:10:25.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.047 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:10:25.047 20:35:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:25.047 20:35:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:10:25.047 20:35:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:25.047 20:35:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:25.047 20:35:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:25.047 20:35:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:25.047 20:35:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:25.047 20:35:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:25.047 20:35:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:25.047 20:35:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:25.047 20:35:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:25.047 20:35:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:25.047 20:35:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:25.047 20:35:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2596955 00:10:25.047 20:35:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2596955 00:10:25.047 20:35:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:25.047 20:35:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 2596955 ']' 00:10:25.047 20:35:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.047 20:35:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:25.047 20:35:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.047 20:35:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:25.047 20:35:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:25.047 [2024-07-15 20:35:59.265870] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:10:25.047 [2024-07-15 20:35:59.265917] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:25.047 EAL: No free 2048 kB hugepages reported on node 1 00:10:25.048 [2024-07-15 20:35:59.325037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:25.048 [2024-07-15 20:35:59.404796] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:25.048 [2024-07-15 20:35:59.404831] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:25.048 [2024-07-15 20:35:59.404838] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:25.048 [2024-07-15 20:35:59.404844] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:25.048 [2024-07-15 20:35:59.404849] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:25.048 [2024-07-15 20:35:59.404886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:25.048 [2024-07-15 20:35:59.404889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.616 20:36:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:25.616 20:36:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:10:25.616 20:36:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:25.616 20:36:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:25.616 20:36:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:25.906 20:36:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:25.906 20:36:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:25.906 20:36:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.906 20:36:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:25.906 [2024-07-15 20:36:00.128284] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:25.906 20:36:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.906 20:36:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:25.906 20:36:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.906 20:36:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:25.906 20:36:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.906 20:36:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:25.906 20:36:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.906 20:36:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:25.906 [2024-07-15 20:36:00.144402] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:25.906 20:36:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.906 20:36:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:25.906 20:36:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.906 20:36:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:25.906 NULL1 00:10:25.906 20:36:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:10:25.906 20:36:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:25.906 20:36:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.906 20:36:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:25.906 Delay0 00:10:25.906 20:36:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.906 20:36:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.906 20:36:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.906 20:36:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:25.906 20:36:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.906 20:36:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2597072 00:10:25.906 20:36:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:25.906 20:36:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:25.906 EAL: No free 2048 kB hugepages reported on node 1 00:10:25.906 [2024-07-15 20:36:00.218969] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
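At this point the stack under test is complete: a TCP transport, subsystem cnode1 with a 10-namespace cap, a listener on 10.0.0.2:4420, and a null bdev wrapped in bdev_delay so that every request sits in a delay queue for roughly a second. That artificial latency is what guarantees I/O is still outstanding when the subsystem is deleted. Condensed into plain rpc.py calls (paths abbreviated; the log uses the full workspace paths), the sequence is:

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512             # 1000 MiB bdev, 512 B blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000  # ~1 s added latency per op
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # Drive queued I/O at the slow namespace, then pull the subsystem under load:
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The burst of failed completions below is the expected outcome: queue depth 128 against a delay bdev means nearly everything submitted is still in flight when cnode1 disappears.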
00:10:27.878 20:36:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:27.878 20:36:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.878 20:36:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:10:27.878 [Read/Write completed with error (sct=0, sc=8) interleaved with "starting I/O failed: -6", repeated for the outstanding queue; identical completions elided] 
00:10:27.879 [2024-07-15 20:36:02.338803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11705c0 is same with the state(5) to be set 
00:10:27.879 [2024-07-15 20:36:02.339251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fec6400cfe0 is same with the state(5) to be set 
00:10:27.879 [further Read/Write completed with error (sct=0, sc=8) lines elided] 
00:10:29.258 [2024-07-15 20:36:03.314558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1171ac0 is same with the state(5) to be set 
00:10:29.258 [2024-07-15 20:36:03.340845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fec6400d2f0 is same with the state(5) to be set 
00:10:29.259 [2024-07-15 20:36:03.341370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11707a0 is same with the state(5) to be set 
00:10:29.259 [2024-07-15 20:36:03.341524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11703e0 is same with the state(5) to be set 
00:10:29.259 [2024-07-15 20:36:03.342106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1170000 is same with the state(5) to be set 
00:10:29.259 Initializing NVMe Controllers 00:10:29.259 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:29.259 Controller IO queue size 128, less than required. 00:10:29.259 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:29.259 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:29.259 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:29.259 Initialization complete. Launching workers. 
00:10:29.259 ======================================================== 00:10:29.259 Latency(us) 00:10:29.259 Device Information : IOPS MiB/s Average min max 00:10:29.259 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 191.23 0.09 947302.96 653.82 1011211.58 00:10:29.259 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.95 0.08 867025.91 246.38 1011633.74 00:10:29.259 ======================================================== 00:10:29.259 Total : 349.18 0.17 910989.87 246.38 1011633.74 00:10:29.259 00:10:29.259 [2024-07-15 20:36:03.342673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1171ac0 (9): Bad file descriptor 00:10:29.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:10:29.259 20:36:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.259 20:36:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:10:29.259 20:36:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2597072 00:10:29.259 20:36:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:29.517 20:36:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:29.517 20:36:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2597072 00:10:29.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2597072) - No such process 00:10:29.517 20:36:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2597072 00:10:29.517 20:36:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:10:29.517 20:36:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 2597072 00:10:29.517 20:36:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:10:29.517 20:36:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:29.517 20:36:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:10:29.517 20:36:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:29.517 20:36:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 2597072 00:10:29.517 20:36:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:10:29.517 20:36:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:29.517 20:36:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:29.517 20:36:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:29.517 20:36:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:29.517 20:36:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.517 20:36:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:29.517 20:36:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.517 20:36:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:10:29.517 20:36:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.517 20:36:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:29.517 [2024-07-15 20:36:03.870046] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:29.517 20:36:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.517 20:36:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.517 20:36:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.517 20:36:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:29.517 20:36:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.517 20:36:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2597807 00:10:29.517 20:36:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:10:29.517 20:36:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:29.517 20:36:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2597807 00:10:29.517 20:36:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:29.517 EAL: No free 2048 kB hugepages reported on node 1 00:10:29.517 [2024-07-15 20:36:03.931697] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
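The '(( delay++ > 20 ))' and 'kill -0 2597807' pairs that follow are the harness waiting for perf to die on its own once its subsystem is pulled, rather than killing it. Reduced to its core, the pattern from delete_subsystem.sh, together with the NOT-wait assertion used after the first perf run, looks like this, assuming perf_pid holds the backgrounded job's PID:

    # Poll for up to ~10 s (20 iterations x 0.5 s) for perf to exit by itself.
    # kill -0 sends no signal; it only tests whether the PID still exists.
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 20 )) && { echo 'perf still alive after deletion'; exit 1; }
      sleep 0.5
    done
    # Bash reaps terminated background children and saves their status, so
    # wait still reports it here. Require a nonzero status: the controller
    # vanished mid-I/O, and a clean exit would mean nothing was disrupted.
    if wait "$perf_pid" 2>/dev/null; then
      echo 'perf unexpectedly succeeded'
      exit 1
    fi

This is also why the log prints 'kill: (2597807) - No such process' once perf has gone away: the probe fails, the loop ends, and the script moves on.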
00:10:30.082 20:36:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:30.082 20:36:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2597807 00:10:30.082 20:36:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:30.648 20:36:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:30.648 20:36:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2597807 00:10:30.648 20:36:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:31.216 20:36:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:31.216 20:36:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2597807 00:10:31.216 20:36:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:31.474 20:36:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:31.474 20:36:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2597807 00:10:31.474 20:36:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:32.038 20:36:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:32.038 20:36:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2597807 00:10:32.038 20:36:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:32.602 20:36:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:32.602 20:36:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2597807 00:10:32.602 20:36:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:32.602 Initializing NVMe Controllers 00:10:32.602 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:32.602 Controller IO queue size 128, less than required. 00:10:32.602 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:32.602 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:32.602 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:32.602 Initialization complete. Launching workers. 
00:10:32.602 ======================================================== 00:10:32.602 Latency(us) 00:10:32.602 Device Information : IOPS MiB/s Average min max 00:10:32.602 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003511.89 1000169.48 1041180.56 00:10:32.602 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004873.19 1000270.44 1011341.15 00:10:32.602 ======================================================== 00:10:32.602 Total : 256.00 0.12 1004192.54 1000169.48 1041180.56 00:10:32.602 00:10:33.168 20:36:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:33.168 20:36:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2597807 00:10:33.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2597807) - No such process 00:10:33.168 20:36:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2597807 00:10:33.168 20:36:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:33.168 20:36:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:10:33.168 20:36:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:33.168 20:36:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:10:33.168 20:36:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:33.168 20:36:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:10:33.168 20:36:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:33.168 20:36:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:33.168 rmmod nvme_tcp 00:10:33.168 rmmod nvme_fabrics 00:10:33.168 rmmod nvme_keyring 00:10:33.168 20:36:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:33.168 20:36:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:10:33.168 20:36:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:10:33.168 20:36:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2596955 ']' 00:10:33.168 20:36:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2596955 00:10:33.168 20:36:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 2596955 ']' 00:10:33.168 20:36:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 2596955 00:10:33.168 20:36:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:10:33.168 20:36:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:33.168 20:36:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2596955 00:10:33.168 20:36:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:33.168 20:36:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:33.168 20:36:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2596955' 00:10:33.168 killing process with pid 2596955 00:10:33.168 20:36:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 2596955 00:10:33.168 20:36:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
2596955 00:10:33.427 20:36:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:33.427 20:36:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:33.427 20:36:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:33.427 20:36:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:33.427 20:36:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:33.427 20:36:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.427 20:36:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:33.427 20:36:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.330 20:36:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:35.330 00:10:35.330 real 0m16.006s 00:10:35.330 user 0m30.159s 00:10:35.330 sys 0m4.921s 00:10:35.330 20:36:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:35.330 20:36:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:35.330 ************************************ 00:10:35.330 END TEST nvmf_delete_subsystem 00:10:35.330 ************************************ 00:10:35.330 20:36:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:35.330 20:36:09 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:10:35.588 20:36:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:35.588 20:36:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:35.588 20:36:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:35.588 ************************************ 00:10:35.588 START TEST nvmf_ns_masking 00:10:35.588 ************************************ 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:10:35.588 * Looking for test storage... 
00:10:35.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2-6 -- # [PATH repeatedly prepended with /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin, then exported and echoed; the expanded values are elided] 
00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=3f9a671f-5af3-408c-a420-f0be5bc27128 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=89f3aac9-63af-4ea0-96e5-47174a702e14 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=f1998805-d94b-4ffc-a0a1-6886d77e71e2 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:10:35.588 20:36:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:40.878 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:40.878 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:40.878 
20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:40.878 Found net devices under 0000:86:00.0: cvl_0_0 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:40.878 Found net devices under 0000:86:00.1: cvl_0_1 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:40.878 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:40.878 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:10:40.878 00:10:40.878 --- 10.0.0.2 ping statistics --- 00:10:40.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.878 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:40.878 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:40.878 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:10:40.878 00:10:40.878 --- 10.0.0.1 ping statistics --- 00:10:40.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.878 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2602206 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2602206 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2602206 ']' 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.878 20:36:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:40.879 20:36:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
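The commands above build the loopback NVMe/TCP test bed: the first port (cvl_0_0) is moved into a private network namespace and given 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator with 10.0.0.1, TCP port 4420 is opened in iptables, reachability is ping-verified in both directions, the nvme-tcp kernel module is loaded, and nvmf_tgt is then started inside the namespace. A minimal sketch of the same topology, assuming generic port names eth0/eth1 and a hypothetical namespace name in place of cvl_0_1/cvl_0_0/cvl_0_0_ns_spdk, with the Jenkins workspace path shortened:

    ip netns add spdk_target_ns                # hypothetical namespace name
    ip link set eth1 netns spdk_target_ns      # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev eth0           # initiator side keeps 10.0.0.1
    ip netns exec spdk_target_ns ip addr add 10.0.0.2/24 dev eth1
    ip link set eth0 up
    ip netns exec spdk_target_ns ip link set eth1 up
    ip netns exec spdk_target_ns ip link set lo up
    iptables -I INPUT 1 -i eth0 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    ping -c 1 10.0.0.2                         # initiator -> target
    ip netns exec spdk_target_ns ping -c 1 10.0.0.1
    modprobe nvme-tcp                          # kernel initiator driver
    ip netns exec spdk_target_ns ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &

Running the target inside its own namespace is what lets one machine act as both initiator and target over real NICs without the kernel short-circuiting the traffic locally.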
00:10:40.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.879 20:36:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:40.879 20:36:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:40.879 20:36:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:10:41.138 [2024-07-15 20:36:15.376208] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:10:41.138 [2024-07-15 20:36:15.376262] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.138 EAL: No free 2048 kB hugepages reported on node 1 00:10:41.138 [2024-07-15 20:36:15.432676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.138 [2024-07-15 20:36:15.512177] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.138 [2024-07-15 20:36:15.512211] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:41.138 [2024-07-15 20:36:15.512219] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:41.138 [2024-07-15 20:36:15.512230] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:41.138 [2024-07-15 20:36:15.512235] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:41.138 [2024-07-15 20:36:15.512252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.707 20:36:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:41.707 20:36:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:10:41.707 20:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:41.707 20:36:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:41.707 20:36:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:41.966 20:36:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:41.966 20:36:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:41.966 [2024-07-15 20:36:16.352392] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:41.966 20:36:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:10:41.966 20:36:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:10:41.966 20:36:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:42.226 Malloc1 00:10:42.226 20:36:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:42.512 Malloc2 00:10:42.512 20:36:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDKISFASTANDAWESOME 00:10:42.512 20:36:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:10:42.771 20:36:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:42.771 [2024-07-15 20:36:17.219954] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:42.771 20:36:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:10:42.771 20:36:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f1998805-d94b-4ffc-a0a1-6886d77e71e2 -a 10.0.0.2 -s 4420 -i 4 00:10:43.030 20:36:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:10:43.030 20:36:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:10:43.030 20:36:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:43.030 20:36:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:43.030 20:36:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:10:45.567 20:36:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:45.567 20:36:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:45.567 20:36:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:45.567 20:36:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:45.567 20:36:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:45.567 20:36:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:10:45.567 20:36:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:10:45.567 20:36:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:45.567 20:36:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:10:45.567 20:36:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:10:45.567 20:36:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:10:45.567 20:36:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:45.567 20:36:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:45.567 [ 0]:0x1 00:10:45.567 20:36:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:45.567 20:36:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:45.567 20:36:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0bb62f0fdc324d7daf4d4502c1e9a158 00:10:45.567 20:36:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0bb62f0fdc324d7daf4d4502c1e9a158 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:45.567 20:36:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Malloc2 -n 2 00:10:45.567 20:36:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:10:45.567 20:36:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:45.567 20:36:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:45.567 [ 0]:0x1 00:10:45.567 20:36:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:45.567 20:36:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:45.567 20:36:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0bb62f0fdc324d7daf4d4502c1e9a158 00:10:45.567 20:36:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0bb62f0fdc324d7daf4d4502c1e9a158 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:45.567 20:36:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:10:45.567 20:36:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:45.567 20:36:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:45.567 [ 1]:0x2 00:10:45.567 20:36:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:45.567 20:36:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:45.567 20:36:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=03855abfd3a44e299f1a9e0cf7ea1e8b 00:10:45.567 20:36:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 03855abfd3a44e299f1a9e0cf7ea1e8b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:45.567 20:36:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:10:45.567 20:36:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:45.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.567 20:36:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.864 20:36:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:10:45.864 20:36:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:10:45.864 20:36:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f1998805-d94b-4ffc-a0a1-6886d77e71e2 -a 10.0.0.2 -s 4420 -i 4 00:10:46.154 20:36:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:10:46.154 20:36:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:10:46.154 20:36:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:46.154 20:36:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:10:46.154 20:36:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:10:46.154 20:36:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:10:48.061 20:36:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:48.061 20:36:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:48.061 20:36:22 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:48.061 20:36:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:48.061 20:36:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:48.061 20:36:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:10:48.061 20:36:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:10:48.061 20:36:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:48.061 20:36:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:10:48.061 20:36:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:10:48.061 20:36:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:10:48.061 20:36:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:48.061 20:36:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:48.061 20:36:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:48.061 20:36:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:48.061 20:36:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:48.061 20:36:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:48.061 20:36:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:48.061 20:36:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:48.061 20:36:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:48.061 20:36:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:48.061 20:36:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:48.061 20:36:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:48.061 20:36:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:48.061 20:36:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:48.061 20:36:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:48.061 20:36:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:48.061 20:36:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:48.061 20:36:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:10:48.061 20:36:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:48.061 20:36:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:48.061 [ 0]:0x2 00:10:48.061 20:36:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:48.061 20:36:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:48.061 20:36:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=03855abfd3a44e299f1a9e0cf7ea1e8b 00:10:48.061 20:36:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
03855abfd3a44e299f1a9e0cf7ea1e8b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:48.061 20:36:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:48.320 20:36:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:10:48.320 20:36:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:48.320 20:36:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:48.320 [ 0]:0x1 00:10:48.320 20:36:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:48.320 20:36:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:48.320 20:36:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0bb62f0fdc324d7daf4d4502c1e9a158 00:10:48.320 20:36:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0bb62f0fdc324d7daf4d4502c1e9a158 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:48.320 20:36:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:10:48.320 20:36:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:48.320 20:36:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:48.320 [ 1]:0x2 00:10:48.320 20:36:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:48.320 20:36:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:48.580 20:36:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=03855abfd3a44e299f1a9e0cf7ea1e8b 00:10:48.580 20:36:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 03855abfd3a44e299f1a9e0cf7ea1e8b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:48.580 20:36:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:48.580 20:36:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:10:48.580 20:36:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:48.580 20:36:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:48.580 20:36:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:48.580 20:36:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:48.580 20:36:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:48.580 20:36:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:48.580 20:36:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:48.580 20:36:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:48.580 20:36:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:48.580 20:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:48.580 20:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:48.580 20:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:10:48.580 20:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:48.580 20:36:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:48.580 20:36:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:48.580 20:36:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:48.580 20:36:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:48.580 20:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:10:48.580 20:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:48.580 20:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:48.580 [ 0]:0x2 00:10:48.580 20:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:48.580 20:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:48.839 20:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=03855abfd3a44e299f1a9e0cf7ea1e8b 00:10:48.839 20:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 03855abfd3a44e299f1a9e0cf7ea1e8b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:48.839 20:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:10:48.839 20:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:48.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.839 20:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:49.099 20:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:10:49.099 20:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f1998805-d94b-4ffc-a0a1-6886d77e71e2 -a 10.0.0.2 -s 4420 -i 4 00:10:49.099 20:36:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:49.099 20:36:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:10:49.099 20:36:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:49.099 20:36:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:10:49.099 20:36:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:10:49.099 20:36:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:10:51.637 20:36:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:51.637 20:36:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:51.637 20:36:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:51.637 20:36:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:10:51.637 20:36:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:51.637 20:36:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
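Every visibility assertion in this test goes through the same probe: list the controller's namespaces, then read the NGUID via Identify Namespace. A namespace created with --no-auto-visible identifies with an all-zero NGUID to any host that has not been granted access with nvmf_ns_add_host, and nvmf_ns_remove_host revokes it again; that is what the alternating ns_is_visible / NOT ns_is_visible checks above and below exercise. A reconstruction of the helper, inferred from the xtrace (the real version lives in target/ns_masking.sh):

    # Succeeds iff namespace $1 (e.g. 0x1) is visible on /dev/nvme0.
    ns_is_visible() {
        local nsid=$1 nguid
        nvme list-ns /dev/nvme0 | grep "$nsid"       # show the NSID if it is listed
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        # Masked namespaces report an all-zero NGUID.
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

The NOT wrapper visible in the trace just runs the probe and asserts a non-zero exit status, so "NOT ns_is_visible 0x1" passes exactly when namespace 1 has been masked away from this host.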
00:10:51.637 20:36:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:10:51.637 20:36:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:51.637 20:36:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:10:51.637 20:36:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:10:51.637 20:36:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:10:51.637 20:36:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:51.637 20:36:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:51.637 [ 0]:0x1 00:10:51.637 20:36:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:51.637 20:36:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:51.637 20:36:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0bb62f0fdc324d7daf4d4502c1e9a158 00:10:51.637 20:36:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0bb62f0fdc324d7daf4d4502c1e9a158 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:51.637 20:36:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:10:51.637 20:36:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:51.637 20:36:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:51.637 [ 1]:0x2 00:10:51.637 20:36:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:51.637 20:36:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:51.637 20:36:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=03855abfd3a44e299f1a9e0cf7ea1e8b 00:10:51.637 20:36:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 03855abfd3a44e299f1a9e0cf7ea1e8b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:51.637 20:36:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:51.637 20:36:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:10:51.637 20:36:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:51.637 20:36:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:51.637 20:36:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:51.637 20:36:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:51.637 20:36:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:51.637 20:36:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:51.637 20:36:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:51.637 20:36:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:51.637 20:36:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:51.637 20:36:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:51.637 20:36:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme 
id-ns /dev/nvme0 -n 0x1 -o json 00:10:51.637 20:36:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:51.638 20:36:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:51.638 20:36:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:51.638 20:36:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:51.638 20:36:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:51.638 20:36:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:51.638 20:36:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:10:51.638 20:36:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:51.638 20:36:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:51.638 [ 0]:0x2 00:10:51.638 20:36:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:51.638 20:36:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:51.638 20:36:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=03855abfd3a44e299f1a9e0cf7ea1e8b 00:10:51.638 20:36:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 03855abfd3a44e299f1a9e0cf7ea1e8b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:51.638 20:36:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:51.638 20:36:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:51.638 20:36:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:51.638 20:36:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:51.638 20:36:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:51.638 20:36:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:51.638 20:36:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:51.638 20:36:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:51.638 20:36:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:51.638 20:36:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:51.638 20:36:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:51.638 20:36:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:51.897 [2024-07-15 20:36:26.157366] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: 
Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:10:51.897 request: 00:10:51.897 { 00:10:51.897 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:51.897 "nsid": 2, 00:10:51.897 "host": "nqn.2016-06.io.spdk:host1", 00:10:51.897 "method": "nvmf_ns_remove_host", 00:10:51.897 "req_id": 1 00:10:51.897 } 00:10:51.897 Got JSON-RPC error response 00:10:51.897 response: 00:10:51.897 { 00:10:51.897 "code": -32602, 00:10:51.897 "message": "Invalid parameters" 00:10:51.897 } 00:10:51.897 20:36:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:51.897 20:36:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:51.897 20:36:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:51.897 20:36:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:51.897 20:36:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:10:51.897 20:36:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:51.897 20:36:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:51.897 20:36:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:51.897 20:36:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:51.897 20:36:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:51.897 20:36:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:51.897 20:36:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:51.897 20:36:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:51.897 20:36:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:51.897 20:36:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:51.897 20:36:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:51.897 20:36:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:51.897 20:36:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:51.897 20:36:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:51.897 20:36:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:51.897 20:36:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:51.897 20:36:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:51.897 20:36:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:10:51.897 20:36:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:51.897 20:36:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:51.897 [ 0]:0x2 00:10:51.897 20:36:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:51.897 20:36:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:51.897 20:36:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=03855abfd3a44e299f1a9e0cf7ea1e8b 00:10:51.897 20:36:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
03855abfd3a44e299f1a9e0cf7ea1e8b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:51.897 20:36:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:10:51.897 20:36:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:52.156 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.156 20:36:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2604204 00:10:52.156 20:36:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:10:52.156 20:36:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2604204 /var/tmp/host.sock 00:10:52.156 20:36:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:10:52.156 20:36:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2604204 ']' 00:10:52.156 20:36:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:10:52.156 20:36:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:52.156 20:36:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:10:52.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:52.156 20:36:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:52.156 20:36:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:52.156 [2024-07-15 20:36:26.515311] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
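The NOT block just above also confirms the negative case: nvmf_ns_remove_host is rejected with -32602 (Invalid parameters) for namespace 2, which was created auto-visible and therefore has no per-host allow list. From here the test verifies masking end to end with a second SPDK process acting as the host: the namespaces are re-added with explicit NGUIDs (uuid2nguid is just the UUID with the dashes stripped), namespace 1 is tied to host1 and namespace 2 to host2, and bdev_nvme_attach_controller is invoked twice against /var/tmp/host.sock with the two host NQNs. If masking works, controller nvme0 (host1) exposes only bdev nvme0n1 and controller nvme1 (host2) only nvme1n2, with UUIDs matching what was provisioned. A condensed replay of the host1 half, assuming rpc.py is on PATH instead of using the full Jenkins workspace path, and spelling out --no-auto-visible as the earlier invocations do:

    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 \
        --no-auto-visible -g 3F9A671F5AF3408CA420F0BE5BC27128   # NGUID = UUID minus dashes
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 \
        -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 -b nvme0
    rpc.py -s /var/tmp/host.sock bdev_get_bdevs | jq -r '.[].name'   # expect nvme0n1 only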
00:10:52.156 [2024-07-15 20:36:26.515356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2604204 ] 00:10:52.156 EAL: No free 2048 kB hugepages reported on node 1 00:10:52.156 [2024-07-15 20:36:26.568755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.415 [2024-07-15 20:36:26.647830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.983 20:36:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:52.983 20:36:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:10:52.983 20:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.241 20:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:53.241 20:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 3f9a671f-5af3-408c-a420-f0be5bc27128 00:10:53.241 20:36:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:10:53.241 20:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 3F9A671F5AF3408CA420F0BE5BC27128 -i 00:10:53.499 20:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 89f3aac9-63af-4ea0-96e5-47174a702e14 00:10:53.499 20:36:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:10:53.499 20:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 89F3AAC963AF4EA096E547174A702E14 -i 00:10:53.757 20:36:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:53.757 20:36:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:10:54.014 20:36:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:10:54.014 20:36:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:10:54.272 nvme0n1 00:10:54.272 20:36:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:10:54.272 20:36:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:10:54.529 nvme1n2 00:10:54.529 20:36:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:10:54.529 20:36:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:10:54.529 20:36:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:10:54.529 20:36:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:10:54.529 20:36:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:10:54.789 20:36:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:10:54.789 20:36:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:10:54.789 20:36:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:10:54.789 20:36:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:10:55.047 20:36:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 3f9a671f-5af3-408c-a420-f0be5bc27128 == \3\f\9\a\6\7\1\f\-\5\a\f\3\-\4\0\8\c\-\a\4\2\0\-\f\0\b\e\5\b\c\2\7\1\2\8 ]] 00:10:55.047 20:36:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:10:55.047 20:36:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:10:55.047 20:36:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:10:55.047 20:36:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 89f3aac9-63af-4ea0-96e5-47174a702e14 == \8\9\f\3\a\a\c\9\-\6\3\a\f\-\4\e\a\0\-\9\6\e\5\-\4\7\1\7\4\a\7\0\2\e\1\4 ]] 00:10:55.047 20:36:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2604204 00:10:55.047 20:36:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2604204 ']' 00:10:55.047 20:36:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2604204 00:10:55.047 20:36:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:10:55.047 20:36:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:55.047 20:36:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2604204 00:10:55.305 20:36:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:55.305 20:36:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:55.305 20:36:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2604204' 00:10:55.305 killing process with pid 2604204 00:10:55.305 20:36:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2604204 00:10:55.305 20:36:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2604204 00:10:55.572 20:36:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:55.572 20:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:10:55.572 20:36:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:10:55.572 20:36:30 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:55.572 20:36:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:10:55.572 20:36:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:55.572 20:36:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:10:55.572 20:36:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:55.572 20:36:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:55.572 rmmod nvme_tcp 00:10:55.572 rmmod nvme_fabrics 00:10:55.836 rmmod nvme_keyring 00:10:55.836 20:36:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:55.836 20:36:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:10:55.836 20:36:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:10:55.836 20:36:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2602206 ']' 00:10:55.836 20:36:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2602206 00:10:55.836 20:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2602206 ']' 00:10:55.836 20:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2602206 00:10:55.836 20:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:10:55.836 20:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:55.836 20:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2602206 00:10:55.836 20:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:55.836 20:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:55.836 20:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2602206' 00:10:55.836 killing process with pid 2602206 00:10:55.836 20:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2602206 00:10:55.836 20:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2602206 00:10:56.095 20:36:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:56.095 20:36:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:56.095 20:36:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:56.095 20:36:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:56.095 20:36:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:56.095 20:36:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.095 20:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:56.095 20:36:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.001 20:36:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:58.001 00:10:58.001 real 0m22.574s 00:10:58.001 user 0m24.241s 00:10:58.001 sys 0m6.104s 00:10:58.001 20:36:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:58.001 20:36:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:58.001 ************************************ 00:10:58.001 END TEST nvmf_ns_masking 00:10:58.001 ************************************ 00:10:58.001 20:36:32 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:10:58.001 20:36:32 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:10:58.001 20:36:32 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:10:58.001 20:36:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:58.001 20:36:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:58.001 20:36:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:58.261 ************************************ 00:10:58.261 START TEST nvmf_nvme_cli 00:10:58.261 ************************************ 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:10:58.261 * Looking for test storage... 00:10:58.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:10:58.261 20:36:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:03.538 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:03.538 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:11:03.538 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:03.538 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:03.538 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:03.538 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:03.538 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:03.538 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:11:03.538 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:03.538 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:11:03.538 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:11:03.538 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:11:03.538 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:11:03.538 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:11:03.538 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:11:03.538 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:03.538 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:03.538 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:03.538 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:03.538 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:03.538 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:03.538 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:03.538 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:03.538 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:03.539 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:03.539 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:03.539 Found net devices under 0000:86:00.0: cvl_0_0 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:03.539 Found net devices under 0000:86:00.1: cvl_0_1 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:03.539 20:36:37 
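
The namespace plumbing traced above is the heart of nvmf_tcp_init: one port of the NIC (cvl_0_0) is moved into a private network namespace to act as the target, while the other port (cvl_0_1) stays in the root namespace as the initiator. A condensed sketch of the equivalent commands, using the interface names and addresses from this run (assumes root, and that both ports are otherwise unused):

  ip netns add cvl_0_0_ns_spdk                   # namespace that will host the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # move the first port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in

The two pings that follow confirm the path in both directions before any NVMe traffic is attempted.
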
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:03.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:03.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:11:03.539 00:11:03.539 --- 10.0.0.2 ping statistics --- 00:11:03.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.539 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:03.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:03.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:11:03.539 00:11:03.539 --- 10.0.0.1 ping statistics --- 00:11:03.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.539 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2608205 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2608205 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 2608205 ']' 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:03.539 20:36:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:03.539 [2024-07-15 20:36:37.569520] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
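
nvmfappstart boils down to launching the target inside the namespace and polling until its RPC socket answers. A simplified stand-in (paths assumed relative to the SPDK checkout; rpc_get_methods is used here only as a liveness probe, the real waitforlisten in autotest_common.sh is more elaborate):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # retry an RPC until the app listens on the default socket /var/tmp/spdk.sock
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done

The EAL and reactor notices that follow are the target coming up on the four cores selected by -m 0xF.
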
00:11:03.539 [2024-07-15 20:36:37.569564] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.539 EAL: No free 2048 kB hugepages reported on node 1 00:11:03.539 [2024-07-15 20:36:37.626965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:03.539 [2024-07-15 20:36:37.708776] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:03.539 [2024-07-15 20:36:37.708812] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:03.539 [2024-07-15 20:36:37.708819] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:03.539 [2024-07-15 20:36:37.708825] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:03.539 [2024-07-15 20:36:37.708830] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:03.539 [2024-07-15 20:36:37.708873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.539 [2024-07-15 20:36:37.708966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:03.539 [2024-07-15 20:36:37.709051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:03.539 [2024-07-15 20:36:37.709052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.105 20:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:04.105 20:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:11:04.105 20:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:04.105 20:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:04.105 20:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:04.105 20:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:04.105 20:36:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:04.105 20:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.105 20:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:04.105 [2024-07-15 20:36:38.422256] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:04.105 20:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.105 20:36:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:04.105 20:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.105 20:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:04.105 Malloc0 00:11:04.105 20:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.105 20:36:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:04.105 20:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.105 20:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:04.105 Malloc1 00:11:04.105 20:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.105 20:36:38 
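
rpc_cmd is a thin wrapper around scripts/rpc.py; written out directly, the provisioning so far reduces to the following (MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 come from nvme_cli.sh earlier in the trace):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # TCP transport, -u: 8 KiB I/O unit size
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM-backed bdev, 512 B blocks
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
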
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:04.105 20:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.105 20:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:04.105 20:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.105 20:36:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:04.105 20:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.105 20:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:04.105 20:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.105 20:36:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:04.105 20:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.105 20:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:04.105 20:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.105 20:36:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:04.105 20:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.105 20:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:04.105 [2024-07-15 20:36:38.503318] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:04.105 20:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.105 20:36:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:04.105 20:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.105 20:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:04.105 20:36:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.105 20:36:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:04.364 00:11:04.364 Discovery Log Number of Records 2, Generation counter 2 00:11:04.364 =====Discovery Log Entry 0====== 00:11:04.364 trtype: tcp 00:11:04.364 adrfam: ipv4 00:11:04.364 subtype: current discovery subsystem 00:11:04.364 treq: not required 00:11:04.364 portid: 0 00:11:04.364 trsvcid: 4420 00:11:04.364 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:04.364 traddr: 10.0.0.2 00:11:04.364 eflags: explicit discovery connections, duplicate discovery information 00:11:04.364 sectype: none 00:11:04.364 =====Discovery Log Entry 1====== 00:11:04.364 trtype: tcp 00:11:04.364 adrfam: ipv4 00:11:04.364 subtype: nvme subsystem 00:11:04.364 treq: not required 00:11:04.364 portid: 0 00:11:04.364 trsvcid: 4420 00:11:04.364 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:04.364 traddr: 10.0.0.2 00:11:04.364 eflags: none 00:11:04.365 sectype: none 00:11:04.365 20:36:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:04.365 20:36:38 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:04.365 20:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:04.365 20:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:04.365 20:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:04.365 20:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:04.365 20:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:04.365 20:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:04.365 20:36:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:04.365 20:36:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:04.365 20:36:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:05.378 20:36:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:05.378 20:36:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:11:05.378 20:36:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:05.378 20:36:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:05.378 20:36:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:05.378 20:36:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:11:07.912 20:36:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:07.912 20:36:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:07.912 20:36:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:07.912 20:36:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:07.912 20:36:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:07.912 20:36:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:11:07.912 20:36:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:11:07.912 20:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:07.912 20:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:07.912 20:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:07.912 20:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:07.912 20:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:07.912 20:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:07.913 20:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:07.913 20:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:07.913 20:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:07.913 20:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:07.913 20:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:07.913 20:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:07.913 20:36:41 
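
Stripped of the xtrace noise, the target-side subsystem setup from nvme_cli.sh and the initiator-side connect/verify sequence above come down to a handful of commands; the poll loop is a sketch of what waitforserial does (the serial SPDKISFASTANDAWESOME is NVMF_SERIAL from nvmf/common.sh):

  # Target side (earlier in this trace): one subsystem, two namespaces, listener on 10.0.0.2:4420
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Initiator side: connect, then poll by serial until both namespaces appear as block devices
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  while (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) < 2 )); do
      sleep 2   # waitforserial: nvme_devices must reach nvme_device_counter (2)
  done
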
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:07.913 20:36:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:11:07.913 /dev/nvme0n1 ]] 00:11:07.913 20:36:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:11:07.913 20:36:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:11:07.913 20:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:07.913 20:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:07.913 20:36:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:07.913 20:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:07.913 20:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:07.913 20:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:07.913 20:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:07.913 20:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:07.913 20:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:07.913 20:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:07.913 20:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:07.913 20:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:07.913 20:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:07.913 20:36:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:11:07.913 20:36:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:08.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.172 20:36:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:08.172 20:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:11:08.172 20:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:08.172 20:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:08.172 20:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:08.172 20:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:08.172 20:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:11:08.172 20:36:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:11:08.172 20:36:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:08.172 20:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.172 20:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:08.172 20:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.172 20:36:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:08.172 20:36:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:11:08.172 20:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:08.172 20:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:11:08.172 20:36:42 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:08.172 20:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:11:08.172 20:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:08.172 20:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:08.172 rmmod nvme_tcp 00:11:08.172 rmmod nvme_fabrics 00:11:08.172 rmmod nvme_keyring 00:11:08.172 20:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:08.172 20:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:11:08.172 20:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:11:08.172 20:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2608205 ']' 00:11:08.172 20:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2608205 00:11:08.172 20:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 2608205 ']' 00:11:08.172 20:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 2608205 00:11:08.172 20:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:11:08.172 20:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:08.172 20:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2608205 00:11:08.172 20:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:08.172 20:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:08.172 20:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2608205' 00:11:08.172 killing process with pid 2608205 00:11:08.172 20:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 2608205 00:11:08.172 20:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 2608205 00:11:08.431 20:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:08.431 20:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:08.431 20:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:08.431 20:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:08.431 20:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:08.431 20:36:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.431 20:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:08.431 20:36:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.965 20:36:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:10.965 00:11:10.965 real 0m12.353s 00:11:10.965 user 0m21.599s 00:11:10.965 sys 0m4.290s 00:11:10.965 20:36:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:10.965 20:36:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:10.965 ************************************ 00:11:10.965 END TEST nvmf_nvme_cli 00:11:10.965 ************************************ 00:11:10.965 20:36:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:10.965 20:36:44 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:11:10.965 20:36:44 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:10.965 20:36:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:10.965 20:36:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:10.965 20:36:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:10.965 ************************************ 00:11:10.965 START TEST nvmf_vfio_user 00:11:10.965 ************************************ 00:11:10.965 20:36:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:10.965 * Looking for test storage... 00:11:10.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:10.965 20:36:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:10.965 20:36:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:11:10.965 20:36:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:10.965 20:36:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:10.965 20:36:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:10.965 20:36:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:10.965 20:36:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:10.965 20:36:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:10.965 20:36:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:10.965 20:36:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:10.965 20:36:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:10.965 20:36:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:10.965 20:36:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:10.965 20:36:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:10.965 20:36:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:10.965 20:36:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:10.965 20:36:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:10.965 20:36:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:10.965 20:36:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:10.965 20:36:45 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:10.965 20:36:45 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:10.965 20:36:45 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:10.965 20:36:45 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.966 20:36:45 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.966 20:36:45 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.966 20:36:45 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:11:10.966 20:36:45 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.966 20:36:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:11:10.966 20:36:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:10.966 20:36:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:10.966 20:36:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:10.966 20:36:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:10.966 20:36:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:10.966 20:36:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:10.966 20:36:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:10.966 20:36:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:10.966 20:36:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:10.966 20:36:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:10.966 20:36:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:11:10.966 
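
What build_nvmf_app_args assembled above, reduced to its effect in this run; the base command itself is set elsewhere in common.sh, so it is shown here as an assumption:

  NVMF_APP=(./build/bin/nvmf_tgt)               # assumed base command
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm/instance id and tracepoint group mask
  NVMF_APP+=("${NO_HUGE[@]}")                   # empty in this run; populated only for no-hugepage variants
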
20:36:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:10.966 20:36:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:10.966 20:36:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:10.966 20:36:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:11:10.966 20:36:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:11:10.966 20:36:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:11:10.966 20:36:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:11:10.966 20:36:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2609711 00:11:10.966 20:36:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2609711' 00:11:10.966 Process pid: 2609711 00:11:10.966 20:36:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:10.966 20:36:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2609711 00:11:10.966 20:36:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:11:10.966 20:36:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2609711 ']' 00:11:10.966 20:36:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.966 20:36:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:10.966 20:36:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.966 20:36:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:10.966 20:36:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:10.966 [2024-07-15 20:36:45.085727] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:11:10.966 [2024-07-15 20:36:45.085776] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.966 EAL: No free 2048 kB hugepages reported on node 1 00:11:10.966 [2024-07-15 20:36:45.140835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:10.966 [2024-07-15 20:36:45.215159] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:10.966 [2024-07-15 20:36:45.215200] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:10.966 [2024-07-15 20:36:45.215207] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:10.966 [2024-07-15 20:36:45.215213] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:10.966 [2024-07-15 20:36:45.215218] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
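
Unlike the TCP test, which passed -m 0xF, this target is started with -m '[0,1,2,3]'. Both select cores 0 through 3; the difference is visible in the two EAL parameter lines in this log, where the mask form becomes -c 0xF and the list form becomes -l 0,1,2,3:

  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF           # hex core mask (TCP test)
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]'   # explicit core list (this test)
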
00:11:10.966 [2024-07-15 20:36:45.215274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:10.966 [2024-07-15 20:36:45.215372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:10.966 [2024-07-15 20:36:45.215463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:10.966 [2024-07-15 20:36:45.215463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.533 20:36:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:11.533 20:36:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:11:11.533 20:36:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:12.470 20:36:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:11:12.728 20:36:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:12.728 20:36:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:12.728 20:36:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:12.728 20:36:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:12.728 20:36:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:12.985 Malloc1 00:11:12.985 20:36:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:13.243 20:36:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:13.243 20:36:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:11:13.501 20:36:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:13.501 20:36:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:11:13.501 20:36:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:13.760 Malloc2 00:11:13.760 20:36:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:11:13.760 20:36:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:11:14.019 20:36:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:11:14.279 20:36:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:11:14.279 20:36:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:11:14.279 20:36:48 
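
The per-device setup traced above is easier to read flattened into the loop it came from: with the VFIOUSER transport the listener address is a filesystem directory (where the vfio-user socket for the emulated controller lives) rather than an IP address, and the service id is a placeholder (-s 0). A condensed equivalent:

  ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
  for i in 1 2; do
      mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
      ./scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
      ./scripts/rpc.py nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
      ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
      ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
          -t VFIOUSER -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
  done
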
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:14.279 20:36:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:14.279 20:36:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:11:14.279 20:36:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:14.279 [2024-07-15 20:36:48.621595] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:11:14.279 [2024-07-15 20:36:48.621637] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2610207 ] 00:11:14.279 EAL: No free 2048 kB hugepages reported on node 1 00:11:14.279 [2024-07-15 20:36:48.650775] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:11:14.279 [2024-07-15 20:36:48.653097] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:14.279 [2024-07-15 20:36:48.653116] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fde157fd000 00:11:14.279 [2024-07-15 20:36:48.654097] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:14.279 [2024-07-15 20:36:48.655088] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:14.279 [2024-07-15 20:36:48.656100] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:14.279 [2024-07-15 20:36:48.657105] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:14.279 [2024-07-15 20:36:48.658114] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:14.279 [2024-07-15 20:36:48.659119] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:14.279 [2024-07-15 20:36:48.660126] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:14.279 [2024-07-15 20:36:48.661128] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:14.279 [2024-07-15 20:36:48.662131] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:14.279 [2024-07-15 20:36:48.662141] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fde157f2000 00:11:14.279 [2024-07-15 20:36:48.663083] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:14.279 [2024-07-15 20:36:48.675704] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:11:14.279 [2024-07-15 20:36:48.675728] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:11:14.279 [2024-07-15 20:36:48.678236] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:14.279 [2024-07-15 20:36:48.678273] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:14.279 [2024-07-15 20:36:48.678343] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:11:14.279 [2024-07-15 20:36:48.678364] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:11:14.279 [2024-07-15 20:36:48.678370] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:11:14.279 [2024-07-15 20:36:48.679232] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:11:14.279 [2024-07-15 20:36:48.679242] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:11:14.279 [2024-07-15 20:36:48.679248] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:11:14.279 [2024-07-15 20:36:48.680235] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:14.279 [2024-07-15 20:36:48.680245] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:11:14.279 [2024-07-15 20:36:48.680251] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:11:14.279 [2024-07-15 20:36:48.681241] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:11:14.279 [2024-07-15 20:36:48.681249] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:14.279 [2024-07-15 20:36:48.682250] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:11:14.279 [2024-07-15 20:36:48.682257] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:11:14.279 [2024-07-15 20:36:48.682261] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:11:14.279 [2024-07-15 20:36:48.682267] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:14.280 [2024-07-15 20:36:48.682372] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:11:14.280 [2024-07-15 20:36:48.682376] 
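
The register offsets in this trace map onto the standard NVMe controller register set even though the "device" is a vfio-user socket: 0x0 and 0x8 are CAP and VS read in the earlier states, 0x24/0x28/0x30 are AQA/ASQ/ACQ programming the admin queue, 0x14 is CC (the CC.EN = 1 step in progress here), and 0x1c is CSTS, polled until CSTS.RDY follows. The whole initialization sequence can be replayed by hand against the socket created above, using the same command the test runs:

  ./build/bin/spdk_nvme_identify \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -g -L nvme -L nvme_vfio -L vfio_pci
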
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:14.280 [2024-07-15 20:36:48.682381] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:11:14.280 [2024-07-15 20:36:48.683256] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:11:14.280 [2024-07-15 20:36:48.684259] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:11:14.280 [2024-07-15 20:36:48.685266] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:14.280 [2024-07-15 20:36:48.686264] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:14.280 [2024-07-15 20:36:48.686339] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:14.280 [2024-07-15 20:36:48.690234] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:11:14.280 [2024-07-15 20:36:48.690244] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:14.280 [2024-07-15 20:36:48.690250] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:11:14.280 [2024-07-15 20:36:48.690267] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:11:14.280 [2024-07-15 20:36:48.690274] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:11:14.280 [2024-07-15 20:36:48.690289] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:14.280 [2024-07-15 20:36:48.690294] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:14.280 [2024-07-15 20:36:48.690306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:14.280 [2024-07-15 20:36:48.690348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:14.280 [2024-07-15 20:36:48.690357] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:11:14.280 [2024-07-15 20:36:48.690365] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:11:14.280 [2024-07-15 20:36:48.690369] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:11:14.280 [2024-07-15 20:36:48.690373] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:14.280 [2024-07-15 20:36:48.690377] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:11:14.280 [2024-07-15 20:36:48.690381] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:11:14.280 [2024-07-15 20:36:48.690385] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:11:14.280 [2024-07-15 20:36:48.690392] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:11:14.280 [2024-07-15 20:36:48.690401] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:14.280 [2024-07-15 20:36:48.690414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:14.280 [2024-07-15 20:36:48.690425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.280 [2024-07-15 20:36:48.690433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.280 [2024-07-15 20:36:48.690440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.280 [2024-07-15 20:36:48.690448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.280 [2024-07-15 20:36:48.690452] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:11:14.280 [2024-07-15 20:36:48.690459] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:14.280 [2024-07-15 20:36:48.690467] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:14.280 [2024-07-15 20:36:48.690476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:14.280 [2024-07-15 20:36:48.690482] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:11:14.280 [2024-07-15 20:36:48.690487] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:14.280 [2024-07-15 20:36:48.690492] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:11:14.280 [2024-07-15 20:36:48.690498] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:11:14.280 [2024-07-15 20:36:48.690505] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:14.280 [2024-07-15 20:36:48.690518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:14.280 [2024-07-15 20:36:48.690566] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:11:14.280 [2024-07-15 20:36:48.690573] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:11:14.280 [2024-07-15 20:36:48.690581] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:14.280 [2024-07-15 20:36:48.690584] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:14.280 [2024-07-15 20:36:48.690590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:14.280 [2024-07-15 20:36:48.690606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:14.280 [2024-07-15 20:36:48.690617] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:11:14.280 [2024-07-15 20:36:48.690625] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:11:14.280 [2024-07-15 20:36:48.690631] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:11:14.280 [2024-07-15 20:36:48.690637] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:14.280 [2024-07-15 20:36:48.690641] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:14.280 [2024-07-15 20:36:48.690646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:14.280 [2024-07-15 20:36:48.690661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:14.280 [2024-07-15 20:36:48.690673] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:14.280 [2024-07-15 20:36:48.690680] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:14.280 [2024-07-15 20:36:48.690686] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:14.280 [2024-07-15 20:36:48.690690] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:14.280 [2024-07-15 20:36:48.690695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:14.280 [2024-07-15 20:36:48.690706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:14.280 [2024-07-15 20:36:48.690713] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:14.280 [2024-07-15 20:36:48.690721] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
00:11:14.280 [2024-07-15 20:36:48.690727] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:11:14.280 [2024-07-15 20:36:48.690733] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:11:14.280 [2024-07-15 20:36:48.690737] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:14.280 [2024-07-15 20:36:48.690742] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:11:14.280 [2024-07-15 20:36:48.690746] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:11:14.280 [2024-07-15 20:36:48.690750] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:11:14.280 [2024-07-15 20:36:48.690754] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:11:14.280 [2024-07-15 20:36:48.690771] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:14.280 [2024-07-15 20:36:48.690780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:14.280 [2024-07-15 20:36:48.690790] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:14.280 [2024-07-15 20:36:48.690802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:14.280 [2024-07-15 20:36:48.690811] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:14.280 [2024-07-15 20:36:48.690821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:14.281 [2024-07-15 20:36:48.690831] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:14.281 [2024-07-15 20:36:48.690842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:14.281 [2024-07-15 20:36:48.690854] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:14.281 [2024-07-15 20:36:48.690858] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:14.281 [2024-07-15 20:36:48.690861] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:14.281 [2024-07-15 20:36:48.690864] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:14.281 [2024-07-15 20:36:48.690870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:14.281 [2024-07-15 20:36:48.690876] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:14.281 
[2024-07-15 20:36:48.690880] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:14.281 [2024-07-15 20:36:48.690886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:14.281 [2024-07-15 20:36:48.690891] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:14.281 [2024-07-15 20:36:48.690895] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:14.281 [2024-07-15 20:36:48.690901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:14.281 [2024-07-15 20:36:48.690909] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:14.281 [2024-07-15 20:36:48.690913] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:14.281 [2024-07-15 20:36:48.690918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:14.281 [2024-07-15 20:36:48.690924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:14.281 [2024-07-15 20:36:48.690935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:14.281 [2024-07-15 20:36:48.690945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:14.281 [2024-07-15 20:36:48.690951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:14.281 ===================================================== 00:11:14.281 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:14.281 ===================================================== 00:11:14.281 Controller Capabilities/Features 00:11:14.281 ================================ 00:11:14.281 Vendor ID: 4e58 00:11:14.281 Subsystem Vendor ID: 4e58 00:11:14.281 Serial Number: SPDK1 00:11:14.281 Model Number: SPDK bdev Controller 00:11:14.281 Firmware Version: 24.09 00:11:14.281 Recommended Arb Burst: 6 00:11:14.281 IEEE OUI Identifier: 8d 6b 50 00:11:14.281 Multi-path I/O 00:11:14.281 May have multiple subsystem ports: Yes 00:11:14.281 May have multiple controllers: Yes 00:11:14.281 Associated with SR-IOV VF: No 00:11:14.281 Max Data Transfer Size: 131072 00:11:14.281 Max Number of Namespaces: 32 00:11:14.281 Max Number of I/O Queues: 127 00:11:14.281 NVMe Specification Version (VS): 1.3 00:11:14.281 NVMe Specification Version (Identify): 1.3 00:11:14.281 Maximum Queue Entries: 256 00:11:14.281 Contiguous Queues Required: Yes 00:11:14.281 Arbitration Mechanisms Supported 00:11:14.281 Weighted Round Robin: Not Supported 00:11:14.281 Vendor Specific: Not Supported 00:11:14.281 Reset Timeout: 15000 ms 00:11:14.281 Doorbell Stride: 4 bytes 00:11:14.281 NVM Subsystem Reset: Not Supported 00:11:14.281 Command Sets Supported 00:11:14.281 NVM Command Set: Supported 00:11:14.281 Boot Partition: Not Supported 00:11:14.281 Memory Page Size Minimum: 4096 bytes 00:11:14.281 Memory Page Size Maximum: 4096 bytes 00:11:14.281 Persistent Memory Region: Not Supported 
00:11:14.281 Optional Asynchronous Events Supported 00:11:14.281 Namespace Attribute Notices: Supported 00:11:14.281 Firmware Activation Notices: Not Supported 00:11:14.281 ANA Change Notices: Not Supported 00:11:14.281 PLE Aggregate Log Change Notices: Not Supported 00:11:14.281 LBA Status Info Alert Notices: Not Supported 00:11:14.281 EGE Aggregate Log Change Notices: Not Supported 00:11:14.281 Normal NVM Subsystem Shutdown event: Not Supported 00:11:14.281 Zone Descriptor Change Notices: Not Supported 00:11:14.281 Discovery Log Change Notices: Not Supported 00:11:14.281 Controller Attributes 00:11:14.281 128-bit Host Identifier: Supported 00:11:14.281 Non-Operational Permissive Mode: Not Supported 00:11:14.281 NVM Sets: Not Supported 00:11:14.281 Read Recovery Levels: Not Supported 00:11:14.281 Endurance Groups: Not Supported 00:11:14.281 Predictable Latency Mode: Not Supported 00:11:14.281 Traffic Based Keep Alive: Not Supported 00:11:14.281 Namespace Granularity: Not Supported 00:11:14.281 SQ Associations: Not Supported 00:11:14.281 UUID List: Not Supported 00:11:14.281 Multi-Domain Subsystem: Not Supported 00:11:14.281 Fixed Capacity Management: Not Supported 00:11:14.281 Variable Capacity Management: Not Supported 00:11:14.281 Delete Endurance Group: Not Supported 00:11:14.281 Delete NVM Set: Not Supported 00:11:14.281 Extended LBA Formats Supported: Not Supported 00:11:14.281 Flexible Data Placement Supported: Not Supported 00:11:14.281 00:11:14.281 Controller Memory Buffer Support 00:11:14.281 ================================ 00:11:14.281 Supported: No 00:11:14.281 00:11:14.281 Persistent Memory Region Support 00:11:14.281 ================================ 00:11:14.281 Supported: No 00:11:14.281 00:11:14.281 Admin Command Set Attributes 00:11:14.281 ============================ 00:11:14.281 Security Send/Receive: Not Supported 00:11:14.281 Format NVM: Not Supported 00:11:14.281 Firmware Activate/Download: Not Supported 00:11:14.281 Namespace Management: Not Supported 00:11:14.281 Device Self-Test: Not Supported 00:11:14.281 Directives: Not Supported 00:11:14.281 NVMe-MI: Not Supported 00:11:14.281 Virtualization Management: Not Supported 00:11:14.281 Doorbell Buffer Config: Not Supported 00:11:14.281 Get LBA Status Capability: Not Supported 00:11:14.281 Command & Feature Lockdown Capability: Not Supported 00:11:14.281 Abort Command Limit: 4 00:11:14.281 Async Event Request Limit: 4 00:11:14.281 Number of Firmware Slots: N/A 00:11:14.281 Firmware Slot 1 Read-Only: N/A 00:11:14.281 Firmware Activation Without Reset: N/A 00:11:14.281 Multiple Update Detection Support: N/A 00:11:14.281 Firmware Update Granularity: No Information Provided 00:11:14.281 Per-Namespace SMART Log: No 00:11:14.281 Asymmetric Namespace Access Log Page: Not Supported 00:11:14.281 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:11:14.281 Command Effects Log Page: Supported 00:11:14.281 Get Log Page Extended Data: Supported 00:11:14.281 Telemetry Log Pages: Not Supported 00:11:14.281 Persistent Event Log Pages: Not Supported 00:11:14.281 Supported Log Pages Log Page: May Support 00:11:14.281 Commands Supported & Effects Log Page: Not Supported 00:11:14.281 Feature Identifiers & Effects Log Page: May Support 00:11:14.281 NVMe-MI Commands & Effects Log Page: May Support 00:11:14.281 Data Area 4 for Telemetry Log: Not Supported 00:11:14.281 Error Log Page Entries Supported: 128 00:11:14.281 Keep Alive: Supported 00:11:14.281 Keep Alive Granularity: 10000 ms 00:11:14.281 00:11:14.281 NVM Command Set Attributes
00:11:14.281 ========================== 00:11:14.281 Submission Queue Entry Size 00:11:14.281 Max: 64 00:11:14.281 Min: 64 00:11:14.281 Completion Queue Entry Size 00:11:14.281 Max: 16 00:11:14.281 Min: 16 00:11:14.281 Number of Namespaces: 32 00:11:14.281 Compare Command: Supported 00:11:14.281 Write Uncorrectable Command: Not Supported 00:11:14.281 Dataset Management Command: Supported 00:11:14.281 Write Zeroes Command: Supported 00:11:14.281 Set Features Save Field: Not Supported 00:11:14.281 Reservations: Not Supported 00:11:14.281 Timestamp: Not Supported 00:11:14.281 Copy: Supported 00:11:14.281 Volatile Write Cache: Present 00:11:14.281 Atomic Write Unit (Normal): 1 00:11:14.281 Atomic Write Unit (PFail): 1 00:11:14.281 Atomic Compare & Write Unit: 1 00:11:14.281 Fused Compare & Write: Supported 00:11:14.281 Scatter-Gather List 00:11:14.281 SGL Command Set: Supported (Dword aligned) 00:11:14.281 SGL Keyed: Not Supported 00:11:14.281 SGL Bit Bucket Descriptor: Not Supported 00:11:14.281 SGL Metadata Pointer: Not Supported 00:11:14.281 Oversized SGL: Not Supported 00:11:14.281 SGL Metadata Address: Not Supported 00:11:14.281 SGL Offset: Not Supported 00:11:14.281 Transport SGL Data Block: Not Supported 00:11:14.281 Replay Protected Memory Block: Not Supported 00:11:14.281 00:11:14.281 Firmware Slot Information 00:11:14.281 ========================= 00:11:14.281 Active slot: 1 00:11:14.281 Slot 1 Firmware Revision: 24.09 00:11:14.282 00:11:14.282 00:11:14.282 Commands Supported and Effects 00:11:14.282 ============================== 00:11:14.282 Admin Commands 00:11:14.282 -------------- 00:11:14.282 Get Log Page (02h): Supported 00:11:14.282 Identify (06h): Supported 00:11:14.282 Abort (08h): Supported 00:11:14.282 Set Features (09h): Supported 00:11:14.282 Get Features (0Ah): Supported 00:11:14.282 Asynchronous Event Request (0Ch): Supported 00:11:14.282 Keep Alive (18h): Supported 00:11:14.282 I/O Commands 00:11:14.282 ------------ 00:11:14.282 Flush (00h): Supported LBA-Change 00:11:14.282 Write (01h): Supported LBA-Change 00:11:14.282 Read (02h): Supported 00:11:14.282 Compare (05h): Supported 00:11:14.282 Write Zeroes (08h): Supported LBA-Change 00:11:14.282 Dataset Management (09h): Supported LBA-Change 00:11:14.282 Copy (19h): Supported LBA-Change 00:11:14.282 00:11:14.282 Error Log 00:11:14.282 ========= 00:11:14.282 00:11:14.282 Arbitration 00:11:14.282 =========== 00:11:14.282 Arbitration Burst: 1 00:11:14.282 00:11:14.282 Power Management 00:11:14.282 ================ 00:11:14.282 Number of Power States: 1 00:11:14.282 Current Power State: Power State #0 00:11:14.282 Power State #0: 00:11:14.282 Max Power: 0.00 W 00:11:14.282 Non-Operational State: Operational 00:11:14.282 Entry Latency: Not Reported 00:11:14.282 Exit Latency: Not Reported 00:11:14.282 Relative Read Throughput: 0 00:11:14.282 Relative Read Latency: 0 00:11:14.282 Relative Write Throughput: 0 00:11:14.282 Relative Write Latency: 0 00:11:14.282 Idle Power: Not Reported 00:11:14.282 Active Power: Not Reported 00:11:14.282 Non-Operational Permissive Mode: Not Supported 00:11:14.282 00:11:14.282 Health Information 00:11:14.282 ================== 00:11:14.282 Critical Warnings: 00:11:14.282 Available Spare Space: OK 00:11:14.282 Temperature: OK 00:11:14.282 Device Reliability: OK 00:11:14.282 Read Only: No 00:11:14.282 Volatile Memory Backup: OK 00:11:14.282 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:14.282 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:14.282 Available Spare: 0% 00:11:14.282 
[2024-07-15 20:36:48.691037] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:14.282 [2024-07-15 20:36:48.691046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:14.282 [2024-07-15 20:36:48.691071] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:11:14.282 [2024-07-15 20:36:48.691080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.282 [2024-07-15 20:36:48.691085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.282 [2024-07-15 20:36:48.691091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.282 [2024-07-15 20:36:48.691096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.282 [2024-07-15 20:36:48.691295] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:14.282 [2024-07-15 20:36:48.691305] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:11:14.282 [2024-07-15 20:36:48.692300] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:14.282 [2024-07-15 20:36:48.692345] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:11:14.282 [2024-07-15 20:36:48.692352] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:11:14.282 [2024-07-15 20:36:48.693310] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:11:14.282 [2024-07-15 20:36:48.693319] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:11:14.282 [2024-07-15 20:36:48.693366] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:11:14.282 [2024-07-15 20:36:48.695345] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:14.282 Available Spare Threshold: 0% 00:11:14.282 Life Percentage Used: 0% 00:11:14.282 Data Units Read: 0 00:11:14.282 Data Units Written: 0 00:11:14.282 Host Read Commands: 0 00:11:14.282 Host Write Commands: 0 00:11:14.282 Controller Busy Time: 0 minutes 00:11:14.282 Power Cycles: 0 00:11:14.282 Power On Hours: 0 hours 00:11:14.282 Unsafe Shutdowns: 0 00:11:14.282 Unrecoverable Media Errors: 0 00:11:14.282 Lifetime Error Log Entries: 0 00:11:14.282 Warning Temperature Time: 0 minutes 00:11:14.282 Critical Temperature Time: 0 minutes 00:11:14.282 00:11:14.282 Number of Queues 00:11:14.282 ================ 00:11:14.282 Number of I/O Submission Queues: 127 00:11:14.282 Number of I/O Completion Queues: 127 00:11:14.282 00:11:14.282 Active Namespaces 00:11:14.282 ================= 00:11:14.282 Namespace ID:1 00:11:14.282 Error Recovery Timeout: Unlimited 00:11:14.282 Command
Set Identifier: NVM (00h) 00:11:14.282 Deallocate: Supported 00:11:14.282 Deallocated/Unwritten Error: Not Supported 00:11:14.282 Deallocated Read Value: Unknown 00:11:14.282 Deallocate in Write Zeroes: Not Supported 00:11:14.282 Deallocated Guard Field: 0xFFFF 00:11:14.282 Flush: Supported 00:11:14.282 Reservation: Supported 00:11:14.282 Namespace Sharing Capabilities: Multiple Controllers 00:11:14.282 Size (in LBAs): 131072 (0GiB) 00:11:14.282 Capacity (in LBAs): 131072 (0GiB) 00:11:14.282 Utilization (in LBAs): 131072 (0GiB) 00:11:14.282 NGUID: 83046F14443E46208A46B41BD47BA08E 00:11:14.282 UUID: 83046f14-443e-4620-8a46-b41bd47ba08e 00:11:14.282 Thin Provisioning: Not Supported 00:11:14.282 Per-NS Atomic Units: Yes 00:11:14.282 Atomic Boundary Size (Normal): 0 00:11:14.282 Atomic Boundary Size (PFail): 0 00:11:14.282 Atomic Boundary Offset: 0 00:11:14.282 Maximum Single Source Range Length: 65535 00:11:14.282 Maximum Copy Length: 65535 00:11:14.282 Maximum Source Range Count: 1 00:11:14.282 NGUID/EUI64 Never Reused: No 00:11:14.282 Namespace Write Protected: No 00:11:14.282 Number of LBA Formats: 1 00:11:14.282 Current LBA Format: LBA Format #00 00:11:14.282 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:14.282 00:11:14.282 20:36:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:14.541 EAL: No free 2048 kB hugepages reported on node 1 00:11:14.541 [2024-07-15 20:36:48.906996] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:19.816 Initializing NVMe Controllers 00:11:19.816 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:19.816 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:19.816 Initialization complete. Launching workers. 00:11:19.816 ======================================================== 00:11:19.816 Latency(us) 00:11:19.816 Device Information : IOPS MiB/s Average min max 00:11:19.816 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39918.99 155.93 3206.08 968.58 6734.00 00:11:19.816 ======================================================== 00:11:19.816 Total : 39918.99 155.93 3206.08 968.58 6734.00 00:11:19.816 00:11:19.816 [2024-07-15 20:36:53.925844] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:19.816 20:36:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:19.816 EAL: No free 2048 kB hugepages reported on node 1 00:11:19.816 [2024-07-15 20:36:54.153936] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:25.087 Initializing NVMe Controllers 00:11:25.087 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:25.087 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:25.087 Initialization complete. Launching workers. 
00:11:25.087 ======================================================== 00:11:25.087 Latency(us) 00:11:25.087 Device Information : IOPS MiB/s Average min max 00:11:25.087 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15860.23 61.95 8069.81 7579.97 15975.56 00:11:25.087 ======================================================== 00:11:25.087 Total : 15860.23 61.95 8069.81 7579.97 15975.56 00:11:25.088 00:11:25.088 [2024-07-15 20:36:59.190168] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:25.088 20:36:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:25.088 EAL: No free 2048 kB hugepages reported on node 1 00:11:25.088 [2024-07-15 20:36:59.375096] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:30.432 [2024-07-15 20:37:04.451562] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:30.432 Initializing NVMe Controllers 00:11:30.433 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:30.433 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:30.433 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:11:30.433 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:11:30.433 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:11:30.433 Initialization complete. Launching workers. 00:11:30.433 Starting thread on core 2 00:11:30.433 Starting thread on core 3 00:11:30.433 Starting thread on core 1 00:11:30.433 20:37:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:11:30.433 EAL: No free 2048 kB hugepages reported on node 1 00:11:30.433 [2024-07-15 20:37:04.731609] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:34.619 [2024-07-15 20:37:08.339444] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:34.619 Initializing NVMe Controllers 00:11:34.619 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:34.619 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:34.619 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:11:34.619 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:11:34.619 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:11:34.619 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:11:34.619 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:34.619 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:34.619 Initialization complete. Launching workers. 
00:11:34.619 Starting thread on core 1 with urgent priority queue 00:11:34.619 Starting thread on core 2 with urgent priority queue 00:11:34.619 Starting thread on core 3 with urgent priority queue 00:11:34.619 Starting thread on core 0 with urgent priority queue 00:11:34.619 SPDK bdev Controller (SPDK1 ) core 0: 1143.33 IO/s 87.46 secs/100000 ios 00:11:34.619 SPDK bdev Controller (SPDK1 ) core 1: 1322.67 IO/s 75.60 secs/100000 ios 00:11:34.619 SPDK bdev Controller (SPDK1 ) core 2: 1315.33 IO/s 76.03 secs/100000 ios 00:11:34.619 SPDK bdev Controller (SPDK1 ) core 3: 1536.00 IO/s 65.10 secs/100000 ios 00:11:34.619 ======================================================== 00:11:34.619 00:11:34.619 20:37:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:34.619 EAL: No free 2048 kB hugepages reported on node 1 00:11:34.619 [2024-07-15 20:37:08.612748] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:34.619 Initializing NVMe Controllers 00:11:34.619 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:34.619 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:34.619 Namespace ID: 1 size: 0GB 00:11:34.619 Initialization complete. 00:11:34.619 INFO: using host memory buffer for IO 00:11:34.619 Hello world! 00:11:34.619 [2024-07-15 20:37:08.649971] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:34.619 20:37:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:34.619 EAL: No free 2048 kB hugepages reported on node 1 00:11:34.619 [2024-07-15 20:37:08.912613] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:35.556 Initializing NVMe Controllers 00:11:35.556 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:35.556 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:35.556 Initialization complete. Launching workers. 
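Note on the attach flow exercised above: every tool in this log (perf, reconnect, arbitration, hello_world, overhead) reaches the target through an SPDK transport ID string of the form 'trtype:VFIOUSER traddr:<socket directory> subnqn:<NQN>'. A minimal C sketch of the same attach/detach cycle against the vfio-user1 endpoint, using SPDK's public NVMe API (error handling trimmed; the program name is illustrative, not part of the test suite):

#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;

	/* Initialize the SPDK environment (memory, PCI/vfio access, etc.). */
	spdk_env_opts_init(&env_opts);
	env_opts.name = "vfio_user_attach"; /* illustrative name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same transport ID string the tools in this log receive via -r. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Connecting drives the CC.EN = 1 write and the CSTS.RDY poll that the
	 * 'enabling controller' debug records in this log correspond to. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* Detaching triggers the shutdown sequence ('disabling controller'). */
	spdk_nvme_detach(ctrlr);
	return 0;
}
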
00:11:35.556 submit (in ns) avg, min, max = 7458.8, 3229.6, 4996257.4 00:11:35.556 complete (in ns) avg, min, max = 20620.0, 1778.3, 4000995.7 00:11:35.556 00:11:35.556 Submit histogram 00:11:35.556 ================ 00:11:35.556 Range in us Cumulative Count 00:11:35.556 3.228 - 3.242: 0.0061% ( 1) 00:11:35.556 3.242 - 3.256: 0.0183% ( 2) 00:11:35.556 3.256 - 3.270: 0.0306% ( 2) 00:11:35.556 3.270 - 3.283: 0.0978% ( 11) 00:11:35.556 3.283 - 3.297: 0.2017% ( 17) 00:11:35.556 3.297 - 3.311: 0.2934% ( 15) 00:11:35.556 3.311 - 3.325: 0.6051% ( 51) 00:11:35.556 3.325 - 3.339: 2.1210% ( 248) 00:11:35.556 3.339 - 3.353: 6.1980% ( 667) 00:11:35.556 3.353 - 3.367: 11.8949% ( 932) 00:11:35.556 3.367 - 3.381: 17.3533% ( 893) 00:11:35.556 3.381 - 3.395: 23.7775% ( 1051) 00:11:35.556 3.395 - 3.409: 29.9939% ( 1017) 00:11:35.556 3.409 - 3.423: 35.3790% ( 881) 00:11:35.556 3.423 - 3.437: 41.1064% ( 937) 00:11:35.556 3.437 - 3.450: 45.8619% ( 778) 00:11:35.556 3.450 - 3.464: 49.7372% ( 634) 00:11:35.556 3.464 - 3.478: 54.6088% ( 797) 00:11:35.556 3.478 - 3.492: 61.6870% ( 1158) 00:11:35.556 3.492 - 3.506: 68.0685% ( 1044) 00:11:35.556 3.506 - 3.520: 72.3716% ( 704) 00:11:35.556 3.520 - 3.534: 77.2188% ( 793) 00:11:35.556 3.534 - 3.548: 81.6381% ( 723) 00:11:35.556 3.548 - 3.562: 84.3643% ( 446) 00:11:35.556 3.562 - 3.590: 86.9743% ( 427) 00:11:35.556 3.590 - 3.617: 87.7873% ( 133) 00:11:35.556 3.617 - 3.645: 88.7408% ( 156) 00:11:35.556 3.645 - 3.673: 90.2262% ( 243) 00:11:35.556 3.673 - 3.701: 91.8704% ( 269) 00:11:35.556 3.701 - 3.729: 93.6430% ( 290) 00:11:35.556 3.729 - 3.757: 95.6235% ( 324) 00:11:35.556 3.757 - 3.784: 97.0171% ( 228) 00:11:35.556 3.784 - 3.812: 98.1663% ( 188) 00:11:35.556 3.812 - 3.840: 98.8631% ( 114) 00:11:35.556 3.840 - 3.868: 99.2359% ( 61) 00:11:35.556 3.868 - 3.896: 99.4988% ( 43) 00:11:35.556 3.896 - 3.923: 99.6027% ( 17) 00:11:35.556 3.923 - 3.951: 99.6210% ( 3) 00:11:35.556 3.951 - 3.979: 99.6271% ( 1) 00:11:35.556 5.315 - 5.343: 99.6333% ( 1) 00:11:35.556 5.370 - 5.398: 99.6394% ( 1) 00:11:35.556 5.454 - 5.482: 99.6516% ( 2) 00:11:35.556 5.510 - 5.537: 99.6638% ( 2) 00:11:35.556 5.621 - 5.649: 99.6760% ( 2) 00:11:35.556 5.760 - 5.788: 99.6883% ( 2) 00:11:35.556 5.788 - 5.816: 99.7005% ( 2) 00:11:35.557 5.816 - 5.843: 99.7127% ( 2) 00:11:35.557 5.843 - 5.871: 99.7188% ( 1) 00:11:35.557 5.871 - 5.899: 99.7249% ( 1) 00:11:35.557 5.955 - 5.983: 99.7372% ( 2) 00:11:35.557 6.094 - 6.122: 99.7433% ( 1) 00:11:35.557 6.150 - 6.177: 99.7494% ( 1) 00:11:35.557 6.734 - 6.762: 99.7555% ( 1) 00:11:35.557 6.790 - 6.817: 99.7616% ( 1) 00:11:35.557 6.817 - 6.845: 99.7677% ( 1) 00:11:35.557 7.012 - 7.040: 99.7738% ( 1) 00:11:35.557 7.040 - 7.068: 99.7922% ( 3) 00:11:35.557 7.235 - 7.290: 99.7983% ( 1) 00:11:35.557 7.513 - 7.569: 99.8044% ( 1) 00:11:35.557 7.569 - 7.624: 99.8105% ( 1) 00:11:35.557 7.624 - 7.680: 99.8166% ( 1) 00:11:35.557 7.680 - 7.736: 99.8411% ( 4) 00:11:35.557 7.736 - 7.791: 99.8472% ( 1) 00:11:35.557 8.125 - 8.181: 99.8594% ( 2) 00:11:35.557 8.292 - 8.348: 99.8655% ( 1) 00:11:35.557 8.403 - 8.459: 99.8716% ( 1) 00:11:35.557 8.459 - 8.515: 99.8778% ( 1) 00:11:35.557 8.515 - 8.570: 99.8839% ( 1) 00:11:35.557 8.904 - 8.960: 99.8900% ( 1) 00:11:35.557 8.960 - 9.016: 99.8961% ( 1) 00:11:35.557 9.127 - 9.183: 99.9022% ( 1) 00:11:35.557 3989.148 - 4017.642: 99.9939% ( 15) 00:11:35.557 4986.435 - 5014.929: 100.0000% ( 1) 00:11:35.557 00:11:35.557 Complete histogram 00:11:35.557 ================== 00:11:35.557 Range in us Cumulative Count 00:11:35.557 1.774 - 1.781: 0.0122% ( 2) 
00:11:35.557 1.781 - 1.795: 0.0367% ( 4) 00:11:35.557 1.795 - 1.809: 0.2017% ( 27) 00:11:35.557 1.809 - 1.823: 1.1675% ( 158) [2024-07-15 20:37:09.934381] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:35.557 1.823 - 1.837: 2.9156% ( 286) 00:11:35.557 1.837 - 1.850: 4.4560% ( 252) 00:11:35.557 1.850 - 1.864: 20.7335% ( 2663) 00:11:35.557 1.864 - 1.878: 75.2689% ( 8922) 00:11:35.557 1.878 - 1.892: 92.0293% ( 2742) 00:11:35.557 1.892 - 1.906: 95.2506% ( 527) 00:11:35.557 1.906 - 1.920: 96.5403% ( 211) 00:11:35.557 1.920 - 1.934: 97.0844% ( 89) 00:11:35.557 1.934 - 1.948: 98.2579% ( 192) 00:11:35.557 1.948 - 1.962: 99.0770% ( 134) 00:11:35.557 1.962 - 1.976: 99.2726% ( 32) 00:11:35.557 1.976 - 1.990: 99.3154% ( 7) 00:11:35.557 1.990 - 2.003: 99.3399% ( 4) 00:11:35.557 2.003 - 2.017: 99.3582% ( 3) 00:11:35.557 2.017 - 2.031: 99.3704% ( 2) 00:11:35.557 2.031 - 2.045: 99.3765% ( 1) 00:11:35.557 2.045 - 2.059: 99.3826% ( 1) 00:11:35.557 3.784 - 3.812: 99.3888% ( 1) 00:11:35.557 3.840 - 3.868: 99.3949% ( 1) 00:11:35.557 4.174 - 4.202: 99.4010% ( 1) 00:11:35.557 4.313 - 4.341: 99.4071% ( 1) 00:11:35.557 4.424 - 4.452: 99.4132% ( 1) 00:11:35.557 4.452 - 4.480: 99.4193% ( 1) 00:11:35.557 4.591 - 4.619: 99.4254% ( 1) 00:11:35.557 4.897 - 4.925: 99.4315% ( 1) 00:11:35.557 5.092 - 5.120: 99.4377% ( 1) 00:11:35.557 5.398 - 5.426: 99.4438% ( 1) 00:11:35.557 5.565 - 5.593: 99.4499% ( 1) 00:11:35.557 5.788 - 5.816: 99.4560% ( 1) 00:11:35.557 5.871 - 5.899: 99.4621% ( 1) 00:11:35.557 5.955 - 5.983: 99.4682% ( 1) 00:11:35.557 6.122 - 6.150: 99.4743% ( 1) 00:11:35.557 6.372 - 6.400: 99.4804% ( 1) 00:11:35.557 6.539 - 6.567: 99.4866% ( 1) 00:11:35.557 6.650 - 6.678: 99.4927% ( 1) 00:11:35.557 6.706 - 6.734: 99.4988% ( 1) 00:11:35.557 6.873 - 6.901: 99.5049% ( 1) 00:11:35.557 7.123 - 7.179: 99.5110% ( 1) 00:11:35.557 7.290 - 7.346: 99.5171% ( 1) 00:11:35.557 12.188 - 12.243: 99.5232% ( 1) 00:11:35.557 40.070 - 40.292: 99.5293% ( 1) 00:11:35.557 2963.367 - 2977.614: 99.5355% ( 1) 00:11:35.557 3989.148 - 4017.642: 100.0000% ( 76) 00:11:35.557 00:11:35.557 20:37:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 20:37:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 20:37:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 20:37:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 20:37:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:35.817 [ 00:11:35.817 { 00:11:35.817 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:35.817 "subtype": "Discovery", 00:11:35.817 "listen_addresses": [], 00:11:35.817 "allow_any_host": true, 00:11:35.817 "hosts": [] 00:11:35.817 }, 00:11:35.817 { 00:11:35.817 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:35.817 "subtype": "NVMe", 00:11:35.817 "listen_addresses": [ 00:11:35.817 { 00:11:35.817 "trtype": "VFIOUSER", 00:11:35.817 "adrfam": "IPv4", 00:11:35.817 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:35.817 "trsvcid": "0" 00:11:35.817 } 00:11:35.817 ], 00:11:35.817 "allow_any_host": true, 00:11:35.817 "hosts": [], 00:11:35.817 "serial_number": "SPDK1", 00:11:35.817 "model_number": "SPDK bdev 
Controller", 00:11:35.817 "max_namespaces": 32, 00:11:35.817 "min_cntlid": 1, 00:11:35.817 "max_cntlid": 65519, 00:11:35.817 "namespaces": [ 00:11:35.817 { 00:11:35.817 "nsid": 1, 00:11:35.817 "bdev_name": "Malloc1", 00:11:35.817 "name": "Malloc1", 00:11:35.817 "nguid": "83046F14443E46208A46B41BD47BA08E", 00:11:35.817 "uuid": "83046f14-443e-4620-8a46-b41bd47ba08e" 00:11:35.817 } 00:11:35.817 ] 00:11:35.817 }, 00:11:35.817 { 00:11:35.817 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:35.817 "subtype": "NVMe", 00:11:35.817 "listen_addresses": [ 00:11:35.817 { 00:11:35.817 "trtype": "VFIOUSER", 00:11:35.817 "adrfam": "IPv4", 00:11:35.817 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:35.817 "trsvcid": "0" 00:11:35.817 } 00:11:35.817 ], 00:11:35.817 "allow_any_host": true, 00:11:35.817 "hosts": [], 00:11:35.817 "serial_number": "SPDK2", 00:11:35.817 "model_number": "SPDK bdev Controller", 00:11:35.817 "max_namespaces": 32, 00:11:35.817 "min_cntlid": 1, 00:11:35.817 "max_cntlid": 65519, 00:11:35.817 "namespaces": [ 00:11:35.817 { 00:11:35.817 "nsid": 1, 00:11:35.817 "bdev_name": "Malloc2", 00:11:35.817 "name": "Malloc2", 00:11:35.817 "nguid": "43BF26B989F94A27BE254C435622CD37", 00:11:35.817 "uuid": "43bf26b9-89f9-4a27-be25-4c435622cd37" 00:11:35.817 } 00:11:35.817 ] 00:11:35.817 } 00:11:35.817 ] 00:11:35.817 20:37:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:11:35.817 20:37:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2613883 00:11:35.817 20:37:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:11:35.817 20:37:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:11:35.817 20:37:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:11:35.817 20:37:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:35.817 20:37:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:11:35.817 20:37:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:11:35.817 20:37:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:11:35.817 20:37:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:11:35.817 EAL: No free 2048 kB hugepages reported on node 1 00:11:36.076 [2024-07-15 20:37:10.319722] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:36.076 Malloc3 00:11:36.076 20:37:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:11:36.076 [2024-07-15 20:37:10.522232] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:36.076 20:37:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:36.076 Asynchronous Event Request test 00:11:36.076 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:36.076 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:36.076 Registering asynchronous event callbacks... 00:11:36.076 Starting namespace attribute notice tests for all controllers... 00:11:36.076 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:11:36.076 aer_cb - Changed Namespace 00:11:36.076 Cleaning up... 00:11:36.336 [ 00:11:36.336 { 00:11:36.336 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:36.336 "subtype": "Discovery", 00:11:36.336 "listen_addresses": [], 00:11:36.336 "allow_any_host": true, 00:11:36.336 "hosts": [] 00:11:36.336 }, 00:11:36.336 { 00:11:36.336 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:36.336 "subtype": "NVMe", 00:11:36.336 "listen_addresses": [ 00:11:36.336 { 00:11:36.336 "trtype": "VFIOUSER", 00:11:36.336 "adrfam": "IPv4", 00:11:36.336 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:36.336 "trsvcid": "0" 00:11:36.336 } 00:11:36.336 ], 00:11:36.336 "allow_any_host": true, 00:11:36.336 "hosts": [], 00:11:36.336 "serial_number": "SPDK1", 00:11:36.336 "model_number": "SPDK bdev Controller", 00:11:36.336 "max_namespaces": 32, 00:11:36.336 "min_cntlid": 1, 00:11:36.336 "max_cntlid": 65519, 00:11:36.336 "namespaces": [ 00:11:36.336 { 00:11:36.336 "nsid": 1, 00:11:36.336 "bdev_name": "Malloc1", 00:11:36.336 "name": "Malloc1", 00:11:36.336 "nguid": "83046F14443E46208A46B41BD47BA08E", 00:11:36.336 "uuid": "83046f14-443e-4620-8a46-b41bd47ba08e" 00:11:36.336 }, 00:11:36.336 { 00:11:36.336 "nsid": 2, 00:11:36.336 "bdev_name": "Malloc3", 00:11:36.336 "name": "Malloc3", 00:11:36.336 "nguid": "93596A67764D4CD19DC9DD3333D8D609", 00:11:36.336 "uuid": "93596a67-764d-4cd1-9dc9-dd3333d8d609" 00:11:36.336 } 00:11:36.336 ] 00:11:36.336 }, 00:11:36.336 { 00:11:36.336 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:36.336 "subtype": "NVMe", 00:11:36.336 "listen_addresses": [ 00:11:36.336 { 00:11:36.336 "trtype": "VFIOUSER", 00:11:36.336 "adrfam": "IPv4", 00:11:36.336 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:36.336 "trsvcid": "0" 00:11:36.336 } 00:11:36.336 ], 00:11:36.336 "allow_any_host": true, 00:11:36.336 "hosts": [], 00:11:36.336 "serial_number": "SPDK2", 00:11:36.336 "model_number": "SPDK bdev Controller", 00:11:36.336 
"max_namespaces": 32, 00:11:36.336 "min_cntlid": 1, 00:11:36.336 "max_cntlid": 65519, 00:11:36.336 "namespaces": [ 00:11:36.336 { 00:11:36.336 "nsid": 1, 00:11:36.336 "bdev_name": "Malloc2", 00:11:36.336 "name": "Malloc2", 00:11:36.336 "nguid": "43BF26B989F94A27BE254C435622CD37", 00:11:36.336 "uuid": "43bf26b9-89f9-4a27-be25-4c435622cd37" 00:11:36.336 } 00:11:36.336 ] 00:11:36.336 } 00:11:36.336 ] 00:11:36.336 20:37:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2613883 00:11:36.336 20:37:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:36.336 20:37:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:11:36.336 20:37:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:11:36.336 20:37:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:36.336 [2024-07-15 20:37:10.752103] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:11:36.336 [2024-07-15 20:37:10.752148] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2613896 ] 00:11:36.336 EAL: No free 2048 kB hugepages reported on node 1 00:11:36.336 [2024-07-15 20:37:10.782625] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:11:36.336 [2024-07-15 20:37:10.785110] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:36.336 [2024-07-15 20:37:10.785130] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f52fdc7b000 00:11:36.336 [2024-07-15 20:37:10.786114] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:36.336 [2024-07-15 20:37:10.787120] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:36.336 [2024-07-15 20:37:10.788128] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:36.336 [2024-07-15 20:37:10.789139] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:36.336 [2024-07-15 20:37:10.790146] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:36.336 [2024-07-15 20:37:10.791157] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:36.336 [2024-07-15 20:37:10.792161] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:36.336 [2024-07-15 20:37:10.793164] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:36.336 [2024-07-15 20:37:10.794172] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:36.336 [2024-07-15 20:37:10.794181] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f52fdc70000 00:11:36.336 [2024-07-15 20:37:10.795121] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:36.336 [2024-07-15 20:37:10.803648] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:11:36.336 [2024-07-15 20:37:10.803675] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:11:36.336 [2024-07-15 20:37:10.808745] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:36.336 [2024-07-15 20:37:10.808782] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:36.336 [2024-07-15 20:37:10.808847] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:11:36.336 [2024-07-15 20:37:10.808861] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:11:36.336 [2024-07-15 20:37:10.808866] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:11:36.337 [2024-07-15 20:37:10.809748] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:11:36.337 [2024-07-15 20:37:10.809757] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:11:36.337 [2024-07-15 20:37:10.809763] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:11:36.337 [2024-07-15 20:37:10.810758] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:36.337 [2024-07-15 20:37:10.810767] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:11:36.337 [2024-07-15 20:37:10.810774] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:11:36.337 [2024-07-15 20:37:10.811761] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:11:36.337 [2024-07-15 20:37:10.811769] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:36.337 [2024-07-15 20:37:10.812765] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:11:36.337 [2024-07-15 20:37:10.812774] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:11:36.337 [2024-07-15 20:37:10.812778] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:11:36.337 [2024-07-15 20:37:10.812784] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:36.337 [2024-07-15 20:37:10.812889] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:11:36.337 [2024-07-15 20:37:10.812893] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:36.337 [2024-07-15 20:37:10.812898] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:11:36.337 [2024-07-15 20:37:10.813776] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:11:36.337 [2024-07-15 20:37:10.814790] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:11:36.337 [2024-07-15 20:37:10.815804] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:36.337 [2024-07-15 20:37:10.816805] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:36.337 [2024-07-15 20:37:10.816844] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:36.337 [2024-07-15 20:37:10.817810] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:11:36.337 [2024-07-15 20:37:10.817818] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:36.337 [2024-07-15 20:37:10.817822] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:11:36.337 [2024-07-15 20:37:10.817839] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:11:36.337 [2024-07-15 20:37:10.817849] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:11:36.337 [2024-07-15 20:37:10.817860] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:36.337 [2024-07-15 20:37:10.817864] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:36.337 [2024-07-15 20:37:10.817874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:36.597 [2024-07-15 20:37:10.825232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:36.597 [2024-07-15 20:37:10.825243] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:11:36.597 [2024-07-15 20:37:10.825250] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:11:36.597 [2024-07-15 20:37:10.825254] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:11:36.597 [2024-07-15 20:37:10.825258] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:36.597 [2024-07-15 20:37:10.825263] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:11:36.597 [2024-07-15 20:37:10.825267] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:11:36.597 [2024-07-15 20:37:10.825271] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:11:36.597 [2024-07-15 20:37:10.825278] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:11:36.597 [2024-07-15 20:37:10.825287] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:36.597 [2024-07-15 20:37:10.833230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:36.597 [2024-07-15 20:37:10.833243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.597 [2024-07-15 20:37:10.833250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.597 [2024-07-15 20:37:10.833257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.597 [2024-07-15 20:37:10.833265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.597 [2024-07-15 20:37:10.833271] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:11:36.597 [2024-07-15 20:37:10.833279] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:36.597 [2024-07-15 20:37:10.833287] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:36.597 [2024-07-15 20:37:10.841229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:36.597 [2024-07-15 20:37:10.841236] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:11:36.597 [2024-07-15 20:37:10.841240] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:36.597 [2024-07-15 20:37:10.841246] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:11:36.597 [2024-07-15 20:37:10.841251] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:11:36.597 [2024-07-15 20:37:10.841259] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:36.598 [2024-07-15 20:37:10.849228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:36.598 [2024-07-15 20:37:10.849279] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:11:36.598 [2024-07-15 20:37:10.849286] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:11:36.598 [2024-07-15 20:37:10.849294] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:36.598 [2024-07-15 20:37:10.849298] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:36.598 [2024-07-15 20:37:10.849304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:36.598 [2024-07-15 20:37:10.857231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:36.598 [2024-07-15 20:37:10.857245] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:11:36.598 [2024-07-15 20:37:10.857254] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:11:36.598 [2024-07-15 20:37:10.857261] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:11:36.598 [2024-07-15 20:37:10.857268] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:36.598 [2024-07-15 20:37:10.857272] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:36.598 [2024-07-15 20:37:10.857278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:36.598 [2024-07-15 20:37:10.865230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:36.598 [2024-07-15 20:37:10.865244] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:36.598 [2024-07-15 20:37:10.865251] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:36.598 [2024-07-15 20:37:10.865260] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:36.598 [2024-07-15 20:37:10.865264] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:36.598 [2024-07-15 20:37:10.865270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:36.598 [2024-07-15 20:37:10.873229] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:36.598 [2024-07-15 20:37:10.873238] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:36.598 [2024-07-15 20:37:10.873244] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:11:36.598 [2024-07-15 20:37:10.873253] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:11:36.598 [2024-07-15 20:37:10.873259] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:11:36.598 [2024-07-15 20:37:10.873263] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:36.598 [2024-07-15 20:37:10.873267] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:11:36.598 [2024-07-15 20:37:10.873272] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:11:36.598 [2024-07-15 20:37:10.873276] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:11:36.598 [2024-07-15 20:37:10.873280] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:11:36.598 [2024-07-15 20:37:10.873296] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:36.598 [2024-07-15 20:37:10.881229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:36.598 [2024-07-15 20:37:10.881242] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:36.598 [2024-07-15 20:37:10.889231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:36.598 [2024-07-15 20:37:10.889242] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:36.598 [2024-07-15 20:37:10.897228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:36.598 [2024-07-15 20:37:10.897241] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:36.598 [2024-07-15 20:37:10.905229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:36.598 [2024-07-15 20:37:10.905250] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:36.598 [2024-07-15 20:37:10.905255] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:36.598 [2024-07-15 20:37:10.905258] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 
00:11:36.598 [2024-07-15 20:37:10.905261] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:36.598 [2024-07-15 20:37:10.905267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:36.598 [2024-07-15 20:37:10.905276] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:36.598 [2024-07-15 20:37:10.905280] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:36.598 [2024-07-15 20:37:10.905285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:36.598 [2024-07-15 20:37:10.905292] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:36.598 [2024-07-15 20:37:10.905295] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:36.598 [2024-07-15 20:37:10.905301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:36.598 [2024-07-15 20:37:10.905307] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:36.598 [2024-07-15 20:37:10.905311] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:36.598 [2024-07-15 20:37:10.905316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:36.598 [2024-07-15 20:37:10.913230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:36.598 [2024-07-15 20:37:10.913244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:36.598 [2024-07-15 20:37:10.913254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:36.598 [2024-07-15 20:37:10.913260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:36.598 ===================================================== 00:11:36.598 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:36.598 ===================================================== 00:11:36.598 Controller Capabilities/Features 00:11:36.598 ================================ 00:11:36.598 Vendor ID: 4e58 00:11:36.598 Subsystem Vendor ID: 4e58 00:11:36.598 Serial Number: SPDK2 00:11:36.598 Model Number: SPDK bdev Controller 00:11:36.598 Firmware Version: 24.09 00:11:36.598 Recommended Arb Burst: 6 00:11:36.598 IEEE OUI Identifier: 8d 6b 50 00:11:36.598 Multi-path I/O 00:11:36.598 May have multiple subsystem ports: Yes 00:11:36.598 May have multiple controllers: Yes 00:11:36.598 Associated with SR-IOV VF: No 00:11:36.598 Max Data Transfer Size: 131072 00:11:36.598 Max Number of Namespaces: 32 00:11:36.598 Max Number of I/O Queues: 127 00:11:36.598 NVMe Specification Version (VS): 1.3 00:11:36.598 NVMe Specification Version (Identify): 1.3 00:11:36.598 Maximum Queue Entries: 256 00:11:36.598 Contiguous Queues Required: Yes 00:11:36.598 Arbitration Mechanisms 
Supported 00:11:36.598 Weighted Round Robin: Not Supported 00:11:36.598 Vendor Specific: Not Supported 00:11:36.598 Reset Timeout: 15000 ms 00:11:36.598 Doorbell Stride: 4 bytes 00:11:36.598 NVM Subsystem Reset: Not Supported 00:11:36.598 Command Sets Supported 00:11:36.598 NVM Command Set: Supported 00:11:36.598 Boot Partition: Not Supported 00:11:36.598 Memory Page Size Minimum: 4096 bytes 00:11:36.598 Memory Page Size Maximum: 4096 bytes 00:11:36.598 Persistent Memory Region: Not Supported 00:11:36.599 Optional Asynchronous Events Supported 00:11:36.599 Namespace Attribute Notices: Supported 00:11:36.599 Firmware Activation Notices: Not Supported 00:11:36.599 ANA Change Notices: Not Supported 00:11:36.599 PLE Aggregate Log Change Notices: Not Supported 00:11:36.599 LBA Status Info Alert Notices: Not Supported 00:11:36.599 EGE Aggregate Log Change Notices: Not Supported 00:11:36.599 Normal NVM Subsystem Shutdown event: Not Supported 00:11:36.599 Zone Descriptor Change Notices: Not Supported 00:11:36.599 Discovery Log Change Notices: Not Supported 00:11:36.599 Controller Attributes 00:11:36.599 128-bit Host Identifier: Supported 00:11:36.599 Non-Operational Permissive Mode: Not Supported 00:11:36.599 NVM Sets: Not Supported 00:11:36.599 Read Recovery Levels: Not Supported 00:11:36.599 Endurance Groups: Not Supported 00:11:36.599 Predictable Latency Mode: Not Supported 00:11:36.599 Traffic Based Keep ALive: Not Supported 00:11:36.599 Namespace Granularity: Not Supported 00:11:36.599 SQ Associations: Not Supported 00:11:36.599 UUID List: Not Supported 00:11:36.599 Multi-Domain Subsystem: Not Supported 00:11:36.599 Fixed Capacity Management: Not Supported 00:11:36.599 Variable Capacity Management: Not Supported 00:11:36.599 Delete Endurance Group: Not Supported 00:11:36.599 Delete NVM Set: Not Supported 00:11:36.599 Extended LBA Formats Supported: Not Supported 00:11:36.599 Flexible Data Placement Supported: Not Supported 00:11:36.599 00:11:36.599 Controller Memory Buffer Support 00:11:36.599 ================================ 00:11:36.599 Supported: No 00:11:36.599 00:11:36.599 Persistent Memory Region Support 00:11:36.599 ================================ 00:11:36.599 Supported: No 00:11:36.599 00:11:36.599 Admin Command Set Attributes 00:11:36.599 ============================ 00:11:36.599 Security Send/Receive: Not Supported 00:11:36.599 Format NVM: Not Supported 00:11:36.599 Firmware Activate/Download: Not Supported 00:11:36.599 Namespace Management: Not Supported 00:11:36.599 Device Self-Test: Not Supported 00:11:36.599 Directives: Not Supported 00:11:36.599 NVMe-MI: Not Supported 00:11:36.599 Virtualization Management: Not Supported 00:11:36.599 Doorbell Buffer Config: Not Supported 00:11:36.599 Get LBA Status Capability: Not Supported 00:11:36.599 Command & Feature Lockdown Capability: Not Supported 00:11:36.599 Abort Command Limit: 4 00:11:36.599 Async Event Request Limit: 4 00:11:36.599 Number of Firmware Slots: N/A 00:11:36.599 Firmware Slot 1 Read-Only: N/A 00:11:36.599 Firmware Activation Without Reset: N/A 00:11:36.599 Multiple Update Detection Support: N/A 00:11:36.599 Firmware Update Granularity: No Information Provided 00:11:36.599 Per-Namespace SMART Log: No 00:11:36.599 Asymmetric Namespace Access Log Page: Not Supported 00:11:36.599 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:11:36.599 Command Effects Log Page: Supported 00:11:36.599 Get Log Page Extended Data: Supported 00:11:36.599 Telemetry Log Pages: Not Supported 00:11:36.599 Persistent Event Log Pages: Not Supported 
00:11:36.599 Supported Log Pages Log Page: May Support 00:11:36.599 Commands Supported & Effects Log Page: Not Supported 00:11:36.599 Feature Identifiers & Effects Log Page:May Support 00:11:36.599 NVMe-MI Commands & Effects Log Page: May Support 00:11:36.599 Data Area 4 for Telemetry Log: Not Supported 00:11:36.599 Error Log Page Entries Supported: 128 00:11:36.599 Keep Alive: Supported 00:11:36.599 Keep Alive Granularity: 10000 ms 00:11:36.599 00:11:36.599 NVM Command Set Attributes 00:11:36.599 ========================== 00:11:36.599 Submission Queue Entry Size 00:11:36.599 Max: 64 00:11:36.599 Min: 64 00:11:36.599 Completion Queue Entry Size 00:11:36.599 Max: 16 00:11:36.599 Min: 16 00:11:36.599 Number of Namespaces: 32 00:11:36.599 Compare Command: Supported 00:11:36.599 Write Uncorrectable Command: Not Supported 00:11:36.599 Dataset Management Command: Supported 00:11:36.599 Write Zeroes Command: Supported 00:11:36.599 Set Features Save Field: Not Supported 00:11:36.599 Reservations: Not Supported 00:11:36.599 Timestamp: Not Supported 00:11:36.599 Copy: Supported 00:11:36.599 Volatile Write Cache: Present 00:11:36.599 Atomic Write Unit (Normal): 1 00:11:36.599 Atomic Write Unit (PFail): 1 00:11:36.599 Atomic Compare & Write Unit: 1 00:11:36.599 Fused Compare & Write: Supported 00:11:36.599 Scatter-Gather List 00:11:36.599 SGL Command Set: Supported (Dword aligned) 00:11:36.599 SGL Keyed: Not Supported 00:11:36.599 SGL Bit Bucket Descriptor: Not Supported 00:11:36.599 SGL Metadata Pointer: Not Supported 00:11:36.599 Oversized SGL: Not Supported 00:11:36.599 SGL Metadata Address: Not Supported 00:11:36.599 SGL Offset: Not Supported 00:11:36.599 Transport SGL Data Block: Not Supported 00:11:36.599 Replay Protected Memory Block: Not Supported 00:11:36.599 00:11:36.599 Firmware Slot Information 00:11:36.599 ========================= 00:11:36.599 Active slot: 1 00:11:36.599 Slot 1 Firmware Revision: 24.09 00:11:36.599 00:11:36.599 00:11:36.599 Commands Supported and Effects 00:11:36.599 ============================== 00:11:36.599 Admin Commands 00:11:36.599 -------------- 00:11:36.599 Get Log Page (02h): Supported 00:11:36.599 Identify (06h): Supported 00:11:36.599 Abort (08h): Supported 00:11:36.599 Set Features (09h): Supported 00:11:36.599 Get Features (0Ah): Supported 00:11:36.599 Asynchronous Event Request (0Ch): Supported 00:11:36.599 Keep Alive (18h): Supported 00:11:36.599 I/O Commands 00:11:36.599 ------------ 00:11:36.599 Flush (00h): Supported LBA-Change 00:11:36.599 Write (01h): Supported LBA-Change 00:11:36.599 Read (02h): Supported 00:11:36.599 Compare (05h): Supported 00:11:36.599 Write Zeroes (08h): Supported LBA-Change 00:11:36.599 Dataset Management (09h): Supported LBA-Change 00:11:36.599 Copy (19h): Supported LBA-Change 00:11:36.599 00:11:36.599 Error Log 00:11:36.599 ========= 00:11:36.599 00:11:36.599 Arbitration 00:11:36.599 =========== 00:11:36.599 Arbitration Burst: 1 00:11:36.599 00:11:36.599 Power Management 00:11:36.599 ================ 00:11:36.599 Number of Power States: 1 00:11:36.599 Current Power State: Power State #0 00:11:36.599 Power State #0: 00:11:36.599 Max Power: 0.00 W 00:11:36.599 Non-Operational State: Operational 00:11:36.599 Entry Latency: Not Reported 00:11:36.599 Exit Latency: Not Reported 00:11:36.600 Relative Read Throughput: 0 00:11:36.600 Relative Read Latency: 0 00:11:36.600 Relative Write Throughput: 0 00:11:36.600 Relative Write Latency: 0 00:11:36.600 Idle Power: Not Reported 00:11:36.600 Active Power: Not Reported 00:11:36.600 
Non-Operational Permissive Mode: Not Supported 00:11:36.600 00:11:36.600 Health Information 00:11:36.600 ================== 00:11:36.600 Critical Warnings: 00:11:36.600 Available Spare Space: OK 00:11:36.600 Temperature: OK 00:11:36.600 Device Reliability: OK 00:11:36.600 Read Only: No 00:11:36.600 Volatile Memory Backup: OK 00:11:36.600 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:36.600 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:36.600 Available Spare: 0% 00:11:36.600 [2024-07-15 20:37:10.913346] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:36.600 [2024-07-15 20:37:10.921233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:36.600 [2024-07-15 20:37:10.921266] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:11:36.600 [2024-07-15 20:37:10.921275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.600 [2024-07-15 20:37:10.921281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.600 [2024-07-15 20:37:10.921286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.600 [2024-07-15 20:37:10.921292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.600 [2024-07-15 20:37:10.921342] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:36.600 [2024-07-15 20:37:10.921353] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:11:36.600 [2024-07-15 20:37:10.922347] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:36.600 [2024-07-15 20:37:10.922391] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:11:36.600 [2024-07-15 20:37:10.922397] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:11:36.600 [2024-07-15 20:37:10.923351] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:11:36.600 [2024-07-15 20:37:10.923362] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:11:36.600 [2024-07-15 20:37:10.923411] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:11:36.600 [2024-07-15 20:37:10.924390] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:36.600 Available Spare Threshold: 0% 00:11:36.600 Life Percentage Used: 0% 00:11:36.600 Data Units Read: 0 00:11:36.600 Data Units Written: 0 00:11:36.600 Host Read Commands: 0 00:11:36.600 Host Write Commands: 0 00:11:36.600 Controller Busy Time: 0 minutes 00:11:36.600 Power Cycles: 0 00:11:36.600 Power On Hours: 0 hours 00:11:36.600 Unsafe Shutdowns: 0 00:11:36.600 Unrecoverable Media 
Errors: 0 00:11:36.600 Lifetime Error Log Entries: 0 00:11:36.600 Warning Temperature Time: 0 minutes 00:11:36.600 Critical Temperature Time: 0 minutes 00:11:36.600 00:11:36.600 Number of Queues 00:11:36.600 ================ 00:11:36.600 Number of I/O Submission Queues: 127 00:11:36.600 Number of I/O Completion Queues: 127 00:11:36.600 00:11:36.600 Active Namespaces 00:11:36.600 ================= 00:11:36.600 Namespace ID:1 00:11:36.600 Error Recovery Timeout: Unlimited 00:11:36.600 Command Set Identifier: NVM (00h) 00:11:36.600 Deallocate: Supported 00:11:36.600 Deallocated/Unwritten Error: Not Supported 00:11:36.600 Deallocated Read Value: Unknown 00:11:36.600 Deallocate in Write Zeroes: Not Supported 00:11:36.600 Deallocated Guard Field: 0xFFFF 00:11:36.600 Flush: Supported 00:11:36.600 Reservation: Supported 00:11:36.600 Namespace Sharing Capabilities: Multiple Controllers 00:11:36.600 Size (in LBAs): 131072 (0GiB) 00:11:36.600 Capacity (in LBAs): 131072 (0GiB) 00:11:36.600 Utilization (in LBAs): 131072 (0GiB) 00:11:36.600 NGUID: 43BF26B989F94A27BE254C435622CD37 00:11:36.600 UUID: 43bf26b9-89f9-4a27-be25-4c435622cd37 00:11:36.600 Thin Provisioning: Not Supported 00:11:36.600 Per-NS Atomic Units: Yes 00:11:36.600 Atomic Boundary Size (Normal): 0 00:11:36.600 Atomic Boundary Size (PFail): 0 00:11:36.600 Atomic Boundary Offset: 0 00:11:36.600 Maximum Single Source Range Length: 65535 00:11:36.600 Maximum Copy Length: 65535 00:11:36.600 Maximum Source Range Count: 1 00:11:36.600 NGUID/EUI64 Never Reused: No 00:11:36.600 Namespace Write Protected: No 00:11:36.600 Number of LBA Formats: 1 00:11:36.600 Current LBA Format: LBA Format #00 00:11:36.600 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:36.600 00:11:36.600 20:37:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:36.600 EAL: No free 2048 kB hugepages reported on node 1 00:11:36.859 [2024-07-15 20:37:11.138583] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:42.128 Initializing NVMe Controllers 00:11:42.128 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:42.128 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:11:42.128 Initialization complete. Launching workers. 
00:11:42.128 ======================================================== 00:11:42.128 Latency(us) 00:11:42.128 Device Information : IOPS MiB/s Average min max 00:11:42.128 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39952.25 156.06 3203.66 967.77 7607.19 00:11:42.128 ======================================================== 00:11:42.128 Total : 39952.25 156.06 3203.66 967.77 7607.19 00:11:42.128 00:11:42.128 [2024-07-15 20:37:16.244467] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:42.128 20:37:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:42.128 EAL: No free 2048 kB hugepages reported on node 1 00:11:42.128 [2024-07-15 20:37:16.469140] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:47.401 Initializing NVMe Controllers 00:11:47.401 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:47.401 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:11:47.401 Initialization complete. Launching workers. 00:11:47.401 ======================================================== 00:11:47.401 Latency(us) 00:11:47.401 Device Information : IOPS MiB/s Average min max 00:11:47.401 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39826.75 155.57 3213.69 968.69 7117.56 00:11:47.401 ======================================================== 00:11:47.401 Total : 39826.75 155.57 3213.69 968.69 7117.56 00:11:47.401 00:11:47.401 [2024-07-15 20:37:21.489554] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:47.401 20:37:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:47.401 EAL: No free 2048 kB hugepages reported on node 1 00:11:47.401 [2024-07-15 20:37:21.687785] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:52.670 [2024-07-15 20:37:26.835314] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:52.670 Initializing NVMe Controllers 00:11:52.670 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:52.670 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:52.670 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:11:52.670 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:11:52.670 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:11:52.670 Initialization complete. Launching workers. 
00:11:52.670 Starting thread on core 2 00:11:52.670 Starting thread on core 3 00:11:52.670 Starting thread on core 1 00:11:52.670 20:37:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:11:52.670 EAL: No free 2048 kB hugepages reported on node 1 00:11:52.670 [2024-07-15 20:37:27.118684] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:56.931 [2024-07-15 20:37:30.736425] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:56.931 Initializing NVMe Controllers 00:11:56.931 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:56.931 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:56.931 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:11:56.931 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:11:56.931 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:11:56.931 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:11:56.931 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:56.931 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:56.931 Initialization complete. Launching workers. 00:11:56.931 Starting thread on core 1 with urgent priority queue 00:11:56.931 Starting thread on core 2 with urgent priority queue 00:11:56.931 Starting thread on core 3 with urgent priority queue 00:11:56.931 Starting thread on core 0 with urgent priority queue 00:11:56.931 SPDK bdev Controller (SPDK2 ) core 0: 3789.33 IO/s 26.39 secs/100000 ios 00:11:56.931 SPDK bdev Controller (SPDK2 ) core 1: 3752.33 IO/s 26.65 secs/100000 ios 00:11:56.931 SPDK bdev Controller (SPDK2 ) core 2: 5709.00 IO/s 17.52 secs/100000 ios 00:11:56.931 SPDK bdev Controller (SPDK2 ) core 3: 2910.67 IO/s 34.36 secs/100000 ios 00:11:56.931 ======================================================== 00:11:56.931 00:11:56.931 20:37:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:11:56.931 EAL: No free 2048 kB hugepages reported on node 1 00:11:56.931 [2024-07-15 20:37:31.008171] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:56.931 Initializing NVMe Controllers 00:11:56.931 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:56.931 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:56.931 Namespace ID: 1 size: 0GB 00:11:56.931 Initialization complete. 00:11:56.931 INFO: using host memory buffer for IO 00:11:56.931 Hello world! 
00:11:56.931 [2024-07-15 20:37:31.018243] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:56.931 20:37:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:11:56.931 EAL: No free 2048 kB hugepages reported on node 1 00:11:56.931 [2024-07-15 20:37:31.289172] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:58.309 Initializing NVMe Controllers 00:11:58.309 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:58.309 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:58.309 Initialization complete. Launching workers. 00:11:58.309 submit (in ns) avg, min, max = 7662.5, 3226.1, 4000474.8 00:11:58.309 complete (in ns) avg, min, max = 19567.8, 1775.7, 4148251.3 00:11:58.309 00:11:58.309 Submit histogram 00:11:58.309 ================ 00:11:58.309 Range in us Cumulative Count 00:11:58.309 3.214 - 3.228: 0.0061% ( 1) 00:11:58.309 3.228 - 3.242: 0.0307% ( 4) 00:11:58.309 3.242 - 3.256: 0.0799% ( 8) 00:11:58.309 3.256 - 3.270: 0.1229% ( 7) 00:11:58.309 3.270 - 3.283: 0.2027% ( 13) 00:11:58.309 3.283 - 3.297: 0.3194% ( 19) 00:11:58.309 3.297 - 3.311: 0.7310% ( 67) 00:11:58.309 3.311 - 3.325: 2.2666% ( 250) 00:11:58.309 3.325 - 3.339: 5.8477% ( 583) 00:11:58.309 3.339 - 3.353: 10.1966% ( 708) 00:11:58.309 3.353 - 3.367: 15.4300% ( 852) 00:11:58.309 3.367 - 3.381: 21.0749% ( 919) 00:11:58.309 3.381 - 3.395: 26.7875% ( 930) 00:11:58.309 3.395 - 3.409: 32.2973% ( 897) 00:11:58.309 3.409 - 3.423: 37.7764% ( 892) 00:11:58.309 3.423 - 3.437: 42.9300% ( 839) 00:11:58.309 3.437 - 3.450: 46.9717% ( 658) 00:11:58.309 3.450 - 3.464: 51.0934% ( 671) 00:11:58.309 3.464 - 3.478: 57.0393% ( 968) 00:11:58.309 3.478 - 3.492: 62.2789% ( 853) 00:11:58.309 3.492 - 3.506: 66.6769% ( 716) 00:11:58.309 3.506 - 3.520: 71.9472% ( 858) 00:11:58.309 3.520 - 3.534: 76.9533% ( 815) 00:11:58.309 3.534 - 3.548: 80.2826% ( 542) 00:11:58.309 3.548 - 3.562: 82.9853% ( 440) 00:11:58.309 3.562 - 3.590: 86.1671% ( 518) 00:11:58.309 3.590 - 3.617: 87.5860% ( 231) 00:11:58.309 3.617 - 3.645: 88.8698% ( 209) 00:11:58.309 3.645 - 3.673: 90.5405% ( 272) 00:11:58.309 3.673 - 3.701: 92.1560% ( 263) 00:11:58.309 3.701 - 3.729: 93.8391% ( 274) 00:11:58.309 3.729 - 3.757: 95.5098% ( 272) 00:11:58.309 3.757 - 3.784: 97.0147% ( 245) 00:11:58.309 3.784 - 3.812: 97.8563% ( 137) 00:11:58.309 3.812 - 3.840: 98.7162% ( 140) 00:11:58.309 3.840 - 3.868: 99.0909% ( 61) 00:11:58.309 3.868 - 3.896: 99.3796% ( 47) 00:11:58.309 3.896 - 3.923: 99.4840% ( 17) 00:11:58.309 3.923 - 3.951: 99.5147% ( 5) 00:11:58.309 3.951 - 3.979: 99.5209% ( 1) 00:11:58.309 3.979 - 4.007: 99.5270% ( 1) 00:11:58.309 4.118 - 4.146: 99.5332% ( 1) 00:11:58.309 4.174 - 4.202: 99.5393% ( 1) 00:11:58.309 5.037 - 5.064: 99.5455% ( 1) 00:11:58.309 5.398 - 5.426: 99.5516% ( 1) 00:11:58.309 5.454 - 5.482: 99.5577% ( 1) 00:11:58.309 5.482 - 5.510: 99.5639% ( 1) 00:11:58.309 5.565 - 5.593: 99.5700% ( 1) 00:11:58.309 5.677 - 5.704: 99.5762% ( 1) 00:11:58.309 5.732 - 5.760: 99.5885% ( 2) 00:11:58.309 5.788 - 5.816: 99.6007% ( 2) 00:11:58.309 5.927 - 5.955: 99.6069% ( 1) 00:11:58.309 5.955 - 5.983: 99.6130% ( 1) 00:11:58.309 5.983 - 6.010: 99.6192% ( 1) 00:11:58.309 6.010 - 6.038: 99.6253% ( 1) 00:11:58.309 6.038 - 6.066: 99.6314% ( 1) 00:11:58.309 6.066 - 
6.094: 99.6376% ( 1) 00:11:58.309 6.177 - 6.205: 99.6437% ( 1) 00:11:58.309 6.205 - 6.233: 99.6499% ( 1) 00:11:58.309 6.233 - 6.261: 99.6560% ( 1) 00:11:58.309 6.483 - 6.511: 99.6622% ( 1) 00:11:58.309 6.511 - 6.539: 99.6683% ( 1) 00:11:58.309 6.595 - 6.623: 99.6806% ( 2) 00:11:58.309 6.623 - 6.650: 99.6929% ( 2) 00:11:58.309 6.650 - 6.678: 99.7052% ( 2) 00:11:58.309 6.706 - 6.734: 99.7113% ( 1) 00:11:58.309 6.817 - 6.845: 99.7174% ( 1) 00:11:58.309 6.845 - 6.873: 99.7236% ( 1) 00:11:58.309 6.901 - 6.929: 99.7297% ( 1) 00:11:58.309 6.984 - 7.012: 99.7359% ( 1) 00:11:58.309 7.040 - 7.068: 99.7420% ( 1) 00:11:58.309 7.096 - 7.123: 99.7482% ( 1) 00:11:58.309 7.123 - 7.179: 99.7543% ( 1) 00:11:58.309 7.235 - 7.290: 99.7666% ( 2) 00:11:58.309 7.346 - 7.402: 99.7727% ( 1) 00:11:58.309 7.402 - 7.457: 99.7789% ( 1) 00:11:58.309 7.513 - 7.569: 99.7912% ( 2) 00:11:58.309 7.624 - 7.680: 99.8157% ( 4) 00:11:58.309 7.680 - 7.736: 99.8280% ( 2) 00:11:58.309 [2024-07-15 20:37:32.380311] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:58.309 7.847 - 7.903: 99.8342% ( 1) 00:11:58.309 7.903 - 7.958: 99.8403% ( 1) 00:11:58.309 8.181 - 8.237: 99.8464% ( 1) 00:11:58.309 8.348 - 8.403: 99.8526% ( 1) 00:11:58.309 8.515 - 8.570: 99.8587% ( 1) 00:11:58.309 8.570 - 8.626: 99.8649% ( 1) 00:11:58.309 8.626 - 8.682: 99.8710% ( 1) 00:11:58.309 8.737 - 8.793: 99.8771% ( 1) 00:11:58.309 9.294 - 9.350: 99.8833% ( 1) 00:11:58.309 13.802 - 13.857: 99.8894% ( 1) 00:11:58.309 17.475 - 17.586: 99.8956% ( 1) 00:11:58.309 3989.148 - 4017.642: 100.0000% ( 17) 00:11:58.309 00:11:58.309 Complete histogram 00:11:58.309 ================== 00:11:58.309 Range in us Cumulative Count 00:11:58.309 1.774 - 1.781: 0.0123% ( 2) 00:11:58.309 1.781 - 1.795: 0.2334% ( 36) 00:11:58.309 1.795 - 1.809: 0.3931% ( 26) 00:11:58.309 1.809 - 1.823: 0.4484% ( 9) 00:11:58.309 1.823 - 1.837: 3.0713% ( 427) 00:11:58.309 1.837 - 1.850: 16.9165% ( 2254) 00:11:58.309 1.850 - 1.864: 22.0516% ( 836) 00:11:58.309 1.864 - 1.878: 27.0209% ( 809) 00:11:58.309 1.878 - 1.892: 65.6695% ( 6292) 00:11:58.309 1.892 - 1.906: 92.1007% ( 4303) 00:11:58.309 1.906 - 1.920: 96.2715% ( 679) 00:11:58.309 1.920 - 1.934: 97.2973% ( 167) 00:11:58.309 1.934 - 1.948: 97.7334% ( 71) 00:11:58.309 1.948 - 1.962: 98.4214% ( 112) 00:11:58.309 1.962 - 1.976: 98.9373% ( 84) 00:11:58.309 1.976 - 1.990: 99.1708% ( 38) 00:11:58.309 1.990 - 2.003: 99.2199% ( 8) 00:11:58.309 2.003 - 2.017: 99.2629% ( 7) 00:11:58.309 2.017 - 2.031: 99.3059% ( 7) 00:11:58.309 2.031 - 2.045: 99.3243% ( 3) 00:11:58.309 2.045 - 2.059: 99.3366% ( 2) 00:11:58.309 2.059 - 2.073: 99.3428% ( 1) 00:11:58.309 2.073 - 2.087: 99.3489% ( 1) 00:11:58.309 2.101 - 2.115: 99.3550% ( 1) 00:11:58.309 2.268 - 2.282: 99.3612% ( 1) 00:11:58.309 2.310 - 2.323: 99.3673% ( 1) 00:11:58.309 2.449 - 2.463: 99.3735% ( 1) 00:11:58.309 3.812 - 3.840: 99.3796% ( 1) 00:11:58.309 4.035 - 4.063: 99.3980% ( 3) 00:11:58.309 4.619 - 4.647: 99.4042% ( 1) 00:11:58.309 4.675 - 4.703: 99.4103% ( 1) 00:11:58.309 4.870 - 4.897: 99.4165% ( 1) 00:11:58.309 5.009 - 5.037: 99.4226% ( 1) 00:11:58.309 5.064 - 5.092: 99.4287% ( 1) 00:11:58.309 5.176 - 5.203: 99.4349% ( 1) 00:11:58.309 5.677 - 5.704: 99.4410% ( 1) 00:11:58.309 5.788 - 5.816: 99.4472% ( 1) 00:11:58.309 5.871 - 5.899: 99.4533% ( 1) 00:11:58.309 5.955 - 5.983: 99.4656% ( 2) 00:11:58.309 5.983 - 6.010: 99.4717% ( 1) 00:11:58.309 6.010 - 6.038: 99.4779% ( 1) 00:11:58.309 6.122 - 6.150: 99.4840% ( 1) 00:11:58.309 6.177 - 6.205: 99.4902% ( 1) 
00:11:58.309 6.205 - 6.233: 99.4963% ( 1) 00:11:58.309 6.233 - 6.261: 99.5025% ( 1) 00:11:58.309 6.261 - 6.289: 99.5086% ( 1) 00:11:58.309 6.511 - 6.539: 99.5147% ( 1) 00:11:58.309 6.595 - 6.623: 99.5209% ( 1) 00:11:58.309 6.706 - 6.734: 99.5270% ( 1) 00:11:58.309 6.957 - 6.984: 99.5332% ( 1) 00:11:58.309 8.070 - 8.125: 99.5393% ( 1) 00:11:58.309 8.403 - 8.459: 99.5455% ( 1) 00:11:58.309 12.021 - 12.077: 99.5516% ( 1) 00:11:58.309 17.586 - 17.697: 99.5577% ( 1) 00:11:58.309 3989.148 - 4017.642: 99.9939% ( 71) 00:11:58.309 4131.617 - 4160.111: 100.0000% ( 1) 00:11:58.309 00:11:58.309 20:37:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:11:58.309 20:37:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:11:58.309 20:37:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:11:58.309 20:37:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:11:58.309 20:37:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:58.309 [ 00:11:58.309 { 00:11:58.309 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:58.309 "subtype": "Discovery", 00:11:58.309 "listen_addresses": [], 00:11:58.309 "allow_any_host": true, 00:11:58.309 "hosts": [] 00:11:58.309 }, 00:11:58.309 { 00:11:58.309 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:58.309 "subtype": "NVMe", 00:11:58.309 "listen_addresses": [ 00:11:58.309 { 00:11:58.309 "trtype": "VFIOUSER", 00:11:58.309 "adrfam": "IPv4", 00:11:58.309 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:58.309 "trsvcid": "0" 00:11:58.309 } 00:11:58.309 ], 00:11:58.309 "allow_any_host": true, 00:11:58.309 "hosts": [], 00:11:58.309 "serial_number": "SPDK1", 00:11:58.309 "model_number": "SPDK bdev Controller", 00:11:58.309 "max_namespaces": 32, 00:11:58.309 "min_cntlid": 1, 00:11:58.309 "max_cntlid": 65519, 00:11:58.309 "namespaces": [ 00:11:58.309 { 00:11:58.310 "nsid": 1, 00:11:58.310 "bdev_name": "Malloc1", 00:11:58.310 "name": "Malloc1", 00:11:58.310 "nguid": "83046F14443E46208A46B41BD47BA08E", 00:11:58.310 "uuid": "83046f14-443e-4620-8a46-b41bd47ba08e" 00:11:58.310 }, 00:11:58.310 { 00:11:58.310 "nsid": 2, 00:11:58.310 "bdev_name": "Malloc3", 00:11:58.310 "name": "Malloc3", 00:11:58.310 "nguid": "93596A67764D4CD19DC9DD3333D8D609", 00:11:58.310 "uuid": "93596a67-764d-4cd1-9dc9-dd3333d8d609" 00:11:58.310 } 00:11:58.310 ] 00:11:58.310 }, 00:11:58.310 { 00:11:58.310 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:58.310 "subtype": "NVMe", 00:11:58.310 "listen_addresses": [ 00:11:58.310 { 00:11:58.310 "trtype": "VFIOUSER", 00:11:58.310 "adrfam": "IPv4", 00:11:58.310 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:58.310 "trsvcid": "0" 00:11:58.310 } 00:11:58.310 ], 00:11:58.310 "allow_any_host": true, 00:11:58.310 "hosts": [], 00:11:58.310 "serial_number": "SPDK2", 00:11:58.310 "model_number": "SPDK bdev Controller", 00:11:58.310 "max_namespaces": 32, 00:11:58.310 "min_cntlid": 1, 00:11:58.310 "max_cntlid": 65519, 00:11:58.310 "namespaces": [ 00:11:58.310 { 00:11:58.310 "nsid": 1, 00:11:58.310 "bdev_name": "Malloc2", 00:11:58.310 "name": "Malloc2", 00:11:58.310 "nguid": "43BF26B989F94A27BE254C435622CD37", 00:11:58.310 "uuid": "43bf26b9-89f9-4a27-be25-4c435622cd37" 00:11:58.310 } 00:11:58.310 ] 00:11:58.310 } 00:11:58.310 ] 
00:11:58.310 20:37:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:11:58.310 20:37:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2617569 00:11:58.310 20:37:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:11:58.310 20:37:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:11:58.310 20:37:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:11:58.310 20:37:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:58.310 20:37:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:58.310 20:37:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:11:58.310 20:37:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:11:58.310 20:37:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:11:58.310 EAL: No free 2048 kB hugepages reported on node 1 00:11:58.310 [2024-07-15 20:37:32.745907] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:58.310 Malloc4 00:11:58.569 20:37:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:11:58.569 [2024-07-15 20:37:32.948389] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:58.569 20:37:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:58.569 Asynchronous Event Request test 00:11:58.569 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:58.569 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:58.569 Registering asynchronous event callbacks... 00:11:58.569 Starting namespace attribute notice tests for all controllers... 00:11:58.569 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:11:58.569 aer_cb - Changed Namespace 00:11:58.569 Cleaning up... 
00:11:58.828 [ 00:11:58.829 { 00:11:58.829 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:58.829 "subtype": "Discovery", 00:11:58.829 "listen_addresses": [], 00:11:58.829 "allow_any_host": true, 00:11:58.829 "hosts": [] 00:11:58.829 }, 00:11:58.829 { 00:11:58.829 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:58.829 "subtype": "NVMe", 00:11:58.829 "listen_addresses": [ 00:11:58.829 { 00:11:58.829 "trtype": "VFIOUSER", 00:11:58.829 "adrfam": "IPv4", 00:11:58.829 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:58.829 "trsvcid": "0" 00:11:58.829 } 00:11:58.829 ], 00:11:58.829 "allow_any_host": true, 00:11:58.829 "hosts": [], 00:11:58.829 "serial_number": "SPDK1", 00:11:58.829 "model_number": "SPDK bdev Controller", 00:11:58.829 "max_namespaces": 32, 00:11:58.829 "min_cntlid": 1, 00:11:58.829 "max_cntlid": 65519, 00:11:58.829 "namespaces": [ 00:11:58.829 { 00:11:58.829 "nsid": 1, 00:11:58.829 "bdev_name": "Malloc1", 00:11:58.829 "name": "Malloc1", 00:11:58.829 "nguid": "83046F14443E46208A46B41BD47BA08E", 00:11:58.829 "uuid": "83046f14-443e-4620-8a46-b41bd47ba08e" 00:11:58.829 }, 00:11:58.829 { 00:11:58.829 "nsid": 2, 00:11:58.829 "bdev_name": "Malloc3", 00:11:58.829 "name": "Malloc3", 00:11:58.829 "nguid": "93596A67764D4CD19DC9DD3333D8D609", 00:11:58.829 "uuid": "93596a67-764d-4cd1-9dc9-dd3333d8d609" 00:11:58.829 } 00:11:58.829 ] 00:11:58.829 }, 00:11:58.829 { 00:11:58.829 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:58.829 "subtype": "NVMe", 00:11:58.829 "listen_addresses": [ 00:11:58.829 { 00:11:58.829 "trtype": "VFIOUSER", 00:11:58.829 "adrfam": "IPv4", 00:11:58.829 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:58.829 "trsvcid": "0" 00:11:58.829 } 00:11:58.829 ], 00:11:58.829 "allow_any_host": true, 00:11:58.829 "hosts": [], 00:11:58.829 "serial_number": "SPDK2", 00:11:58.829 "model_number": "SPDK bdev Controller", 00:11:58.829 "max_namespaces": 32, 00:11:58.829 "min_cntlid": 1, 00:11:58.829 "max_cntlid": 65519, 00:11:58.829 "namespaces": [ 00:11:58.829 { 00:11:58.829 "nsid": 1, 00:11:58.829 "bdev_name": "Malloc2", 00:11:58.829 "name": "Malloc2", 00:11:58.829 "nguid": "43BF26B989F94A27BE254C435622CD37", 00:11:58.829 "uuid": "43bf26b9-89f9-4a27-be25-4c435622cd37" 00:11:58.829 }, 00:11:58.829 { 00:11:58.829 "nsid": 2, 00:11:58.829 "bdev_name": "Malloc4", 00:11:58.829 "name": "Malloc4", 00:11:58.829 "nguid": "573A7990385343FF8FCED342BD23B2CC", 00:11:58.829 "uuid": "573a7990-3853-43ff-8fce-d342bd23b2cc" 00:11:58.829 } 00:11:58.829 ] 00:11:58.829 } 00:11:58.829 ] 00:11:58.829 20:37:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2617569 00:11:58.829 20:37:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:11:58.829 20:37:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2609711 00:11:58.829 20:37:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2609711 ']' 00:11:58.829 20:37:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2609711 00:11:58.829 20:37:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:11:58.829 20:37:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:58.829 20:37:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2609711 00:11:58.829 20:37:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:58.829 20:37:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:11:58.829 20:37:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2609711' 00:11:58.829 killing process with pid 2609711 00:11:58.829 20:37:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 2609711 00:11:58.829 20:37:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2609711 00:11:59.088 20:37:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:11:59.088 20:37:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:59.088 20:37:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:11:59.088 20:37:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:11:59.088 20:37:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:11:59.088 20:37:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2617737 00:11:59.088 20:37:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2617737' 00:11:59.088 Process pid: 2617737 00:11:59.088 20:37:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:11:59.088 20:37:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:59.088 20:37:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2617737 00:11:59.088 20:37:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2617737 ']' 00:11:59.088 20:37:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.088 20:37:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:59.088 20:37:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.088 20:37:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:59.088 20:37:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:59.088 [2024-07-15 20:37:33.502514] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:11:59.088 [2024-07-15 20:37:33.503367] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:11:59.088 [2024-07-15 20:37:33.503406] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:59.088 EAL: No free 2048 kB hugepages reported on node 1 00:11:59.088 [2024-07-15 20:37:33.557520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:59.347 [2024-07-15 20:37:33.632057] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:59.347 [2024-07-15 20:37:33.632101] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:59.347 [2024-07-15 20:37:33.632108] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:59.347 [2024-07-15 20:37:33.632114] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:59.347 [2024-07-15 20:37:33.632118] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:59.347 [2024-07-15 20:37:33.632166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.347 [2024-07-15 20:37:33.632266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.347 [2024-07-15 20:37:33.632317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:59.347 [2024-07-15 20:37:33.632318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.347 [2024-07-15 20:37:33.711202] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:11:59.347 [2024-07-15 20:37:33.711307] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:11:59.347 [2024-07-15 20:37:33.711518] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:11:59.347 [2024-07-15 20:37:33.711882] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:11:59.347 [2024-07-15 20:37:33.712132] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:11:59.914 20:37:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:59.914 20:37:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:11:59.914 20:37:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:00.851 20:37:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:12:01.110 20:37:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:01.110 20:37:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:01.110 20:37:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:01.110 20:37:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:01.110 20:37:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:01.369 Malloc1 00:12:01.369 20:37:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:01.628 20:37:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:01.628 20:37:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:01.887 20:37:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:12:01.887 20:37:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:01.887 20:37:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:02.145 Malloc2 00:12:02.145 20:37:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:02.145 20:37:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:02.404 20:37:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:02.663 20:37:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:12:02.663 20:37:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2617737 00:12:02.663 20:37:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2617737 ']' 00:12:02.663 20:37:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2617737 00:12:02.663 20:37:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:12:02.663 20:37:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:02.663 20:37:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2617737 00:12:02.663 20:37:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:02.663 20:37:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:02.663 20:37:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2617737' 00:12:02.663 killing process with pid 2617737 00:12:02.663 20:37:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 2617737 00:12:02.663 20:37:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2617737 00:12:02.922 20:37:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:02.922 20:37:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:02.922 00:12:02.922 real 0m52.328s 00:12:02.922 user 3m27.221s 00:12:02.922 sys 0m3.535s 00:12:02.922 20:37:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:02.922 20:37:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:02.922 ************************************ 00:12:02.922 END TEST nvmf_vfio_user 00:12:02.922 ************************************ 00:12:02.922 20:37:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:02.922 20:37:37 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:02.922 20:37:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:02.922 20:37:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:02.922 20:37:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:02.922 ************************************ 00:12:02.922 START 
TEST nvmf_vfio_user_nvme_compliance 00:12:02.922 ************************************ 00:12:02.922 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:02.922 * Looking for test storage... 00:12:02.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:12:02.922 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:02.922 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:12:02.922 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:02.922 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:02.922 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:02.922 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:02.922 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:02.922 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:02.922 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:02.922 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:02.922 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:02.922 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:02.922 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:02.922 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:02.922 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:02.922 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:03.181 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:03.181 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:03.181 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:03.182 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.182 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.182 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.182 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.182 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.182 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.182 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:12:03.182 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.182 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:12:03.182 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:03.182 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:03.182 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:03.182 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:03.182 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:03.182 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:03.182 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:03.182 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:03.182 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:12:03.182 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:03.182 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:12:03.182 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:12:03.182 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:12:03.182 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2618405 00:12:03.182 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2618405' 00:12:03.182 Process pid: 2618405 00:12:03.182 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:03.182 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:03.182 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2618405 00:12:03.182 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 2618405 ']' 00:12:03.182 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.182 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:03.182 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.182 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:03.182 20:37:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:03.182 [2024-07-15 20:37:37.468104] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:12:03.182 [2024-07-15 20:37:37.468154] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.182 EAL: No free 2048 kB hugepages reported on node 1 00:12:03.182 [2024-07-15 20:37:37.522921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:03.182 [2024-07-15 20:37:37.602518] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:03.182 [2024-07-15 20:37:37.602553] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:03.182 [2024-07-15 20:37:37.602560] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:03.182 [2024-07-15 20:37:37.602566] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:03.182 [2024-07-15 20:37:37.602571] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
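The compliance target above was launched with "-m 0x7", the startup notice reports three available cores, and the reactor records that follow show one reactor per core 0-2. The mask is a plain per-core bit field, so it can be decoded with shell arithmetic. A minimal sketch, not part of the test scripts, assuming a mask no wider than 32 bits:

mask=0x7                          # -m 0x7 -> binary 111 -> cores 0, 1 and 2
for core in $(seq 0 31); do       # scan each bit position
  if (( (mask >> core) & 1 )); then
    echo "reactor expected on core $core"
  fi
done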
00:12:03.182 [2024-07-15 20:37:37.602612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:03.182 [2024-07-15 20:37:37.602710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.182 [2024-07-15 20:37:37.602710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:04.118 20:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:04.118 20:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:12:04.118 20:37:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:12:05.054 20:37:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:05.054 20:37:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:12:05.054 20:37:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:05.054 20:37:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.054 20:37:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:05.054 20:37:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.054 20:37:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:12:05.054 20:37:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:05.054 20:37:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.054 20:37:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:05.054 malloc0 00:12:05.054 20:37:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.054 20:37:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:12:05.054 20:37:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.054 20:37:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:05.054 20:37:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.054 20:37:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:05.054 20:37:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.054 20:37:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:05.054 20:37:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.054 20:37:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:05.054 20:37:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.054 20:37:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:05.054 20:37:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.054 
20:37:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:12:05.054 EAL: No free 2048 kB hugepages reported on node 1 00:12:05.054 00:12:05.054 00:12:05.054 CUnit - A unit testing framework for C - Version 2.1-3 00:12:05.054 http://cunit.sourceforge.net/ 00:12:05.054 00:12:05.054 00:12:05.054 Suite: nvme_compliance 00:12:05.055 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 20:37:39.495034] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:05.055 [2024-07-15 20:37:39.496393] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:12:05.055 [2024-07-15 20:37:39.496407] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:12:05.055 [2024-07-15 20:37:39.496413] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:12:05.055 [2024-07-15 20:37:39.498055] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:05.055 passed 00:12:05.313 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 20:37:39.576628] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:05.313 [2024-07-15 20:37:39.579647] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:05.313 passed 00:12:05.313 Test: admin_identify_ns ...[2024-07-15 20:37:39.659582] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:05.313 [2024-07-15 20:37:39.720238] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:12:05.313 [2024-07-15 20:37:39.728235] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:12:05.313 [2024-07-15 20:37:39.749322] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:05.313 passed 00:12:05.571 Test: admin_get_features_mandatory_features ...[2024-07-15 20:37:39.827514] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:05.571 [2024-07-15 20:37:39.830534] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:05.571 passed 00:12:05.571 Test: admin_get_features_optional_features ...[2024-07-15 20:37:39.906043] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:05.571 [2024-07-15 20:37:39.909067] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:05.571 passed 00:12:05.571 Test: admin_set_features_number_of_queues ...[2024-07-15 20:37:39.986652] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:05.829 [2024-07-15 20:37:40.094403] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:05.829 passed 00:12:05.829 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 20:37:40.169794] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:05.829 [2024-07-15 20:37:40.172816] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:05.829 passed 00:12:05.829 Test: admin_get_log_page_with_lpo ...[2024-07-15 20:37:40.251765] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:06.087 [2024-07-15 20:37:40.323236] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:12:06.087 [2024-07-15 20:37:40.336294] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:06.087 passed 00:12:06.087 Test: fabric_property_get ...[2024-07-15 20:37:40.412625] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:06.087 [2024-07-15 20:37:40.413862] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:12:06.087 [2024-07-15 20:37:40.415644] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:06.087 passed 00:12:06.087 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 20:37:40.493146] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:06.087 [2024-07-15 20:37:40.494385] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:12:06.087 [2024-07-15 20:37:40.496161] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:06.087 passed 00:12:06.346 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 20:37:40.573089] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:06.346 [2024-07-15 20:37:40.657233] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:06.346 [2024-07-15 20:37:40.673231] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:06.346 [2024-07-15 20:37:40.678319] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:06.346 passed 00:12:06.346 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 20:37:40.753452] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:06.346 [2024-07-15 20:37:40.754691] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:12:06.346 [2024-07-15 20:37:40.756481] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:06.346 passed 00:12:06.650 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 20:37:40.834271] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:06.650 [2024-07-15 20:37:40.914233] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:06.650 [2024-07-15 20:37:40.938233] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:06.650 [2024-07-15 20:37:40.943313] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:06.650 passed 00:12:06.650 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 20:37:41.017510] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:06.650 [2024-07-15 20:37:41.018738] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:12:06.650 [2024-07-15 20:37:41.018764] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:12:06.650 [2024-07-15 20:37:41.020534] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:06.650 passed 00:12:06.650 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 20:37:41.095486] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:06.908 [2024-07-15 20:37:41.188238] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:12:06.908 [2024-07-15 20:37:41.196228] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:12:06.908 [2024-07-15 20:37:41.204236] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:12:06.908 [2024-07-15 20:37:41.212232] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:12:06.908 [2024-07-15 20:37:41.241306] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:06.908 passed 00:12:06.908 Test: admin_create_io_sq_verify_pc ...[2024-07-15 20:37:41.318426] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:06.908 [2024-07-15 20:37:41.335237] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:12:06.908 [2024-07-15 20:37:41.352600] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:06.908 passed 00:12:07.166 Test: admin_create_io_qp_max_qps ...[2024-07-15 20:37:41.430117] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:08.101 [2024-07-15 20:37:42.536234] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:12:08.669 [2024-07-15 20:37:42.931447] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:08.669 passed 00:12:08.669 Test: admin_create_io_sq_shared_cq ...[2024-07-15 20:37:43.009687] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:08.669 [2024-07-15 20:37:43.145233] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:08.933 [2024-07-15 20:37:43.182307] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:08.933 passed 00:12:08.933 00:12:08.933 Run Summary: Type Total Ran Passed Failed Inactive 00:12:08.933 suites 1 1 n/a 0 0 00:12:08.933 tests 18 18 18 0 0 00:12:08.933 asserts 360 360 360 0 n/a 00:12:08.933 00:12:08.933 Elapsed time = 1.522 seconds 00:12:08.933 20:37:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2618405 00:12:08.933 20:37:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 2618405 ']' 00:12:08.933 20:37:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 2618405 00:12:08.933 20:37:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:12:08.933 20:37:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:08.933 20:37:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2618405 00:12:08.933 20:37:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:08.933 20:37:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:08.933 20:37:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2618405' 00:12:08.933 killing process with pid 2618405 00:12:08.933 20:37:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 2618405 00:12:08.933 20:37:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 2618405 00:12:09.203 20:37:43 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:12:09.203 20:37:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:12:09.203 00:12:09.203 real 0m6.174s 00:12:09.203 user 0m17.671s 00:12:09.203 sys 0m0.432s 00:12:09.203 20:37:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:09.203 20:37:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:09.203 ************************************ 00:12:09.203 END TEST nvmf_vfio_user_nvme_compliance 00:12:09.203 ************************************ 00:12:09.203 20:37:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:09.203 20:37:43 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:09.203 20:37:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:09.203 20:37:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:09.203 20:37:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:09.203 ************************************ 00:12:09.203 START TEST nvmf_vfio_user_fuzz 00:12:09.203 ************************************ 00:12:09.203 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:09.203 * Looking for test storage... 00:12:09.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:09.203 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:09.203 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:12:09.203 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:09.203 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:09.203 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:09.203 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:09.203 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:09.203 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:09.203 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:09.203 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:09.203 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:09.203 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:09.203 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:09.203 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:09.203 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:09.203 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:09.203 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
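The nvmf/common.sh records above show the host identity being generated: "nvme gen-hostnqn" returns a UUID-based NQN, and NVME_HOSTID keeps just the trailing UUID. A minimal sketch of that derivation, assuming nvme-cli is installed; the parameter expansion mirrors the values logged above rather than the exact line in nvmf/common.sh:

NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}     # strip through the last ':' to keep the bare UUID
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")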
00:12:09.203 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:09.204 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:09.204 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:09.204 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:09.204 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:09.204 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.204 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.204 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.204 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:12:09.204 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.204 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:12:09.204 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:09.204 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:09.204 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:09.204 20:37:43 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:09.204 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:09.204 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:09.204 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:09.204 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:09.204 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:09.204 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:09.204 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:09.204 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:12:09.204 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:09.204 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:09.204 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:12:09.204 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2619556 00:12:09.204 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2619556' 00:12:09.204 Process pid: 2619556 00:12:09.204 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:09.204 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2619556 00:12:09.204 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:09.204 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 2619556 ']' 00:12:09.204 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.204 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:09.204 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
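At this point the fuzz target (pid 2619556, started above with "nvmf_tgt -i 0 -e 0xFFFF -m 0x1") is coming up, and waitforlisten blocks until the RPC socket appears. A standalone approximation of that wait loop; the real helper lives in autotest_common.sh, and the retry count and sleep interval here are assumptions for illustration:

pid=2619556                        # pid reported for the target above
rpc_sock=/var/tmp/spdk.sock
for _ in $(seq 1 100); do
  kill -0 "$pid" 2>/dev/null || { echo "target exited early" >&2; exit 1; }
  [[ -S $rpc_sock ]] && break      # done once the UNIX domain socket exists
  sleep 0.1
done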
00:12:09.204 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:09.204 20:37:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:10.140 20:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:10.140 20:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:12:10.140 20:37:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:12:11.076 20:37:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:11.076 20:37:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.076 20:37:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:11.076 20:37:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.076 20:37:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:12:11.076 20:37:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:11.076 20:37:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.076 20:37:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:11.076 malloc0 00:12:11.076 20:37:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.076 20:37:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:12:11.076 20:37:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.076 20:37:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:11.333 20:37:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.333 20:37:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:11.333 20:37:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.333 20:37:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:11.333 20:37:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.333 20:37:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:11.333 20:37:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.333 20:37:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:11.333 20:37:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.333 20:37:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:12:11.333 20:37:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:12:43.488 Fuzzing completed. 
Shutting down the fuzz application 00:12:43.488 00:12:43.488 Dumping successful admin opcodes: 00:12:43.488 8, 9, 10, 24, 00:12:43.488 Dumping successful io opcodes: 00:12:43.488 0, 00:12:43.488 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1013788, total successful commands: 3977, random_seed: 127144256 00:12:43.488 NS: 0x200003a1ef00 admin qp, Total commands completed: 251279, total successful commands: 2030, random_seed: 1280538432 00:12:43.488 20:38:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:12:43.488 20:38:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.488 20:38:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:43.488 20:38:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.488 20:38:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2619556 00:12:43.488 20:38:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 2619556 ']' 00:12:43.488 20:38:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 2619556 00:12:43.488 20:38:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:12:43.488 20:38:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:43.488 20:38:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2619556 00:12:43.488 20:38:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:43.488 20:38:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:43.488 20:38:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2619556' 00:12:43.488 killing process with pid 2619556 00:12:43.488 20:38:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 2619556 00:12:43.488 20:38:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 2619556 00:12:43.488 20:38:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:12:43.488 20:38:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:12:43.488 00:12:43.488 real 0m32.918s 00:12:43.488 user 0m31.287s 00:12:43.488 sys 0m30.743s 00:12:43.488 20:38:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:43.488 20:38:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:43.488 ************************************ 00:12:43.488 END TEST nvmf_vfio_user_fuzz 00:12:43.488 ************************************ 00:12:43.488 20:38:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:43.488 20:38:16 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:43.488 20:38:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:43.488 20:38:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:43.488 20:38:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:43.488 ************************************ 
00:12:43.488 START TEST nvmf_host_management 00:12:43.488 ************************************ 00:12:43.488 20:38:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:43.488 * Looking for test storage... 00:12:43.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:43.488 20:38:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:43.488 20:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:12:43.488 20:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:43.488 20:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:43.488 20:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:43.488 20:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:43.488 20:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:43.488 20:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:43.488 20:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:43.488 20:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:43.488 20:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:43.488 20:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:43.488 20:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:43.488 20:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:43.488 20:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:43.488 20:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:43.488 20:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:43.488 20:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:43.488 20:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:43.488 20:38:16 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:43.488 20:38:16 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:43.488 20:38:16 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:43.488 20:38:16 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.488 
20:38:16 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.489 20:38:16 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.489 20:38:16 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:12:43.489 20:38:16 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.489 20:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:12:43.489 20:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:43.489 20:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:43.489 20:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:43.489 20:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:43.489 20:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:43.489 20:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:43.489 20:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:43.489 20:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:43.489 20:38:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:43.489 20:38:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:43.489 20:38:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:12:43.489 20:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:43.489 20:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:43.489 20:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:43.489 20:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:43.489 20:38:16 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:43.489 20:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.489 20:38:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:43.489 20:38:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.489 20:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:43.489 20:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:43.489 20:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:12:43.489 20:38:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:47.674 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:47.674 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:47.674 Found net devices under 0000:86:00.0: cvl_0_0 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:47.674 Found net devices under 0000:86:00.1: cvl_0_1 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:47.674 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:47.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:47.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:12:47.675 00:12:47.675 --- 10.0.0.2 ping statistics --- 00:12:47.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.675 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:47.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:47.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:12:47.675 00:12:47.675 --- 10.0.0.1 ping statistics --- 00:12:47.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.675 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2627974 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2627974 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2627974 ']' 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
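The nvmf_tcp_init block traced above splits the two E810 ports across a network namespace so one host can play both initiator and target: cvl_0_0 moves into cvl_0_0_ns_spdk and takes the target address 10.0.0.2/24, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, with an iptables rule opening the NVMe/TCP port and a ping in each direction to prove the path. A minimal standalone sketch of the same topology, using the interface names from this run:

TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"            # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"        # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                           # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1       # target namespace -> initiator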
00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:12:47.675 20:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:47.675 [2024-07-15 20:38:21.832933] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:12:47.675 [2024-07-15 20:38:21.832978] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:47.675 EAL: No free 2048 kB hugepages reported on node 1 00:12:47.675 [2024-07-15 20:38:21.889821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:47.675 [2024-07-15 20:38:21.971695] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:47.675 [2024-07-15 20:38:21.971730] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:47.675 [2024-07-15 20:38:21.971737] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:47.675 [2024-07-15 20:38:21.971743] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:47.675 [2024-07-15 20:38:21.971748] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:47.675 [2024-07-15 20:38:21.971786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:47.675 [2024-07-15 20:38:21.971869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:47.675 [2024-07-15 20:38:21.971975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:47.675 [2024-07-15 20:38:21.971976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:48.242 20:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:48.242 20:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:12:48.242 20:38:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:48.242 20:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:48.242 20:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:48.242 20:38:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:48.242 20:38:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:48.242 20:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.242 20:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:48.242 [2024-07-15 20:38:22.683083] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:48.242 20:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.242 20:38:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:12:48.242 20:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:48.242 20:38:22 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:48.242 20:38:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:48.242 20:38:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:12:48.242 20:38:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:12:48.242 20:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.242 20:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:48.242 Malloc0 00:12:48.500 [2024-07-15 20:38:22.742607] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:48.500 20:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.500 20:38:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:12:48.500 20:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:48.500 20:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:48.500 20:38:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2628119 00:12:48.500 20:38:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2628119 /var/tmp/bdevperf.sock 00:12:48.500 20:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2628119 ']' 00:12:48.500 20:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:48.500 20:38:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:12:48.500 20:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:48.500 20:38:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:12:48.500 20:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:48.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
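host_management.sh builds its target configuration as a batch: the heredoc written at @22 and @23 is replayed through rpc_cmd at @30, which is why only the resulting notices (the Malloc0 bdev, the listener on 10.0.0.2:4420) show up in the trace. A rough equivalent as individual scripts/rpc.py calls; the batch itself is not echoed, so the malloc geometry and serial number here are illustrative guesses:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC bdev_malloc_create 64 512 -b Malloc0        # size/block size are assumptions
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0000000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420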
00:12:48.500 20:38:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:48.500 20:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:48.500 20:38:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:48.500 20:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:48.501 20:38:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:48.501 20:38:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:48.501 { 00:12:48.501 "params": { 00:12:48.501 "name": "Nvme$subsystem", 00:12:48.501 "trtype": "$TEST_TRANSPORT", 00:12:48.501 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:48.501 "adrfam": "ipv4", 00:12:48.501 "trsvcid": "$NVMF_PORT", 00:12:48.501 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:48.501 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:48.501 "hdgst": ${hdgst:-false}, 00:12:48.501 "ddgst": ${ddgst:-false} 00:12:48.501 }, 00:12:48.501 "method": "bdev_nvme_attach_controller" 00:12:48.501 } 00:12:48.501 EOF 00:12:48.501 )") 00:12:48.501 20:38:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:48.501 20:38:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:12:48.501 20:38:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:48.501 20:38:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:48.501 "params": { 00:12:48.501 "name": "Nvme0", 00:12:48.501 "trtype": "tcp", 00:12:48.501 "traddr": "10.0.0.2", 00:12:48.501 "adrfam": "ipv4", 00:12:48.501 "trsvcid": "4420", 00:12:48.501 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:48.501 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:48.501 "hdgst": false, 00:12:48.501 "ddgst": false 00:12:48.501 }, 00:12:48.501 "method": "bdev_nvme_attach_controller" 00:12:48.501 }' 00:12:48.501 [2024-07-15 20:38:22.835238] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:12:48.501 [2024-07-15 20:38:22.835285] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2628119 ] 00:12:48.501 EAL: No free 2048 kB hugepages reported on node 1 00:12:48.501 [2024-07-15 20:38:22.889912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.501 [2024-07-15 20:38:22.963187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.066 Running I/O for 10 seconds... 
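gen_nvmf_target_json renders one bdev_nvme_attach_controller stanza per subsystem from the heredoc template above and hands the result to bdevperf as --json /dev/fd/63, i.e. via bash process substitution. A hand-rolled stand-in follows; the outer "subsystems" envelope is assembled by jq inside the helper and is not echoed in the trace, so its exact shape here is an assumption based on the standard SPDK JSON config schema:

cfg='{"subsystems":[{"subsystem":"bdev","config":[{"method":"bdev_nvme_attach_controller",
"params":{"name":"Nvme0","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420",
"subnqn":"nqn.2016-06.io.spdk:cnode0","hostnqn":"nqn.2016-06.io.spdk:host0",
"hdgst":false,"ddgst":false}}]}]}'
BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
# same flags as target/host_management.sh@72: 64-deep 64KiB verify workload for 10s
"$BDEVPERF" -r /var/tmp/bdevperf.sock --json <(printf '%s\n' "$cfg") -q 64 -o 65536 -w verify -t 10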
00:12:49.326 20:38:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:49.326 20:38:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:12:49.326 20:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:12:49.326 20:38:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.326 20:38:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:49.326 20:38:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.326 20:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:49.326 20:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:12:49.326 20:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:12:49.326 20:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:12:49.326 20:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:12:49.326 20:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:12:49.326 20:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:12:49.326 20:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:49.326 20:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:49.326 20:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:49.326 20:38:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.326 20:38:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:49.326 20:38:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.326 20:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:12:49.326 20:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:12:49.326 20:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:12:49.326 20:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:12:49.326 20:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:12:49.326 20:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:49.326 20:38:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.326 20:38:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:49.326 [2024-07-15 20:38:23.725911] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259e460 is same with the state(5) to be set 00:12:49.326 [2024-07-15 20:38:23.725953] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259e460 is same with the state(5) to be set 00:12:49.326 [2024-07-15 20:38:23.725960] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259e460 is same with the state(5) to be 
set 00:12:49.326 [2024-07-15 20:38:23.726231] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259e460 is same with the state(5) to be set 00:12:49.327 [2024-07-15 20:38:23.726238] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259e460 is same with the state(5) to be set 00:12:49.327 [2024-07-15 20:38:23.726243] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259e460 is same with the state(5) to be set 00:12:49.327 [2024-07-15 20:38:23.726249] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259e460 is same with the state(5) to be set 00:12:49.327 [2024-07-15 20:38:23.726255] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259e460 is same with the state(5) to be set 00:12:49.327 [2024-07-15 20:38:23.726261] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259e460 is same with the state(5) to be set 00:12:49.327 [2024-07-15 20:38:23.726266] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259e460 is same with the state(5) to be set 00:12:49.327 [2024-07-15 20:38:23.726273] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259e460 is same with the state(5) to be set 00:12:49.327 [2024-07-15 20:38:23.726278] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259e460 is same with the state(5) to be set 00:12:49.327 [2024-07-15 20:38:23.726284] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259e460 is same with the state(5) to be set 00:12:49.327 [2024-07-15 20:38:23.726291] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259e460 is same with the state(5) to be set 00:12:49.327 [2024-07-15 20:38:23.726296] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259e460 is same with the state(5) to be set 00:12:49.327 [2024-07-15 20:38:23.726302] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259e460 is same with the state(5) to be set 00:12:49.327 [2024-07-15 20:38:23.726308] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259e460 is same with the state(5) to be set 00:12:49.327 [2024-07-15 20:38:23.726314] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259e460 is same with the state(5) to be set 00:12:49.327 [2024-07-15 20:38:23.726320] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259e460 is same with the state(5) to be set 00:12:49.327 [2024-07-15 20:38:23.726325] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259e460 is same with the state(5) to be set 00:12:49.327 [2024-07-15 20:38:23.726341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:12:49.327 [2024-07-15 20:38:23.726373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.327 [2024-07-15 20:38:23.726384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:12:49.327 [2024-07-15 20:38:23.726391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.327 [2024-07-15 20:38:23.726398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:12:49.327 [2024-07-15 20:38:23.726405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.327 [2024-07-15 20:38:23.726416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:12:49.327 [2024-07-15 20:38:23.726423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.327 [2024-07-15 20:38:23.726430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd4980 is same with the state(5) to be set 00:12:49.327 [2024-07-15 20:38:23.726513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.327 [2024-07-15 20:38:23.726524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.327 [2024-07-15 20:38:23.726539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.327 [2024-07-15 20:38:23.726546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.327 [2024-07-15 20:38:23.726555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.327 [2024-07-15 20:38:23.726562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.327 [2024-07-15 20:38:23.726570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.327 [2024-07-15 20:38:23.726578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.327 [2024-07-15 20:38:23.726586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.327 [2024-07-15 20:38:23.726594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.327 [2024-07-15 20:38:23.726602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.327 [2024-07-15 20:38:23.726610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.327 [2024-07-15 20:38:23.726619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.327 [2024-07-15 20:38:23.726626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.327 [2024-07-15 20:38:23.726635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.327 [2024-07-15 20:38:23.726642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:12:49.327 [2024-07-15 20:38:23.726652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.327 [2024-07-15 20:38:23.726659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.327 [2024-07-15 20:38:23.726668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.327 [2024-07-15 20:38:23.726675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.327 [2024-07-15 20:38:23.726685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.327 [2024-07-15 20:38:23.726692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.327 [2024-07-15 20:38:23.726704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.327 [2024-07-15 20:38:23.726711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.327 [2024-07-15 20:38:23.726721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.327 [2024-07-15 20:38:23.726728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.327 [2024-07-15 20:38:23.726737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.327 [2024-07-15 20:38:23.726744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.726753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.726760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.726769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.726776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.726784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.726792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.726802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.726809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 
[2024-07-15 20:38:23.726818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.726825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.726834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.726841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.726850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.726857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.726865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.726872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.726881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.726888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.726897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.726905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.726916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.726923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.726932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.726939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.726947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.726954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.726962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.726969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 
20:38:23.726978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.726986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.726995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.727003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.727011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.727019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.727028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.727035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.727044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.727051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.727060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.727067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.727076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.727083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.727091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.727098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.727108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.727117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.727125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.727133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.727142] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.727149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.727157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.727164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.727174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.727181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.727190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.727198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.727207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.727215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.727230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.727238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.727249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.727256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.727265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.727272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.727280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.727288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.727297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.727305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.727314] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.727323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.727331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.727339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.727348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.727356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.727365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.727372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.727381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.727388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.727397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.727405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.727413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.727420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.328 [2024-07-15 20:38:23.727429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.328 [2024-07-15 20:38:23.727436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.329 [2024-07-15 20:38:23.727445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.329 [2024-07-15 20:38:23.727453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.329 [2024-07-15 20:38:23.727461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.329 [2024-07-15 20:38:23.727469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.329 [2024-07-15 20:38:23.727478] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.329 [2024-07-15 20:38:23.727485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.329 [2024-07-15 20:38:23.727494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.329 [2024-07-15 20:38:23.727501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.329 [2024-07-15 20:38:23.727512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.329 [2024-07-15 20:38:23.727520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.329 [2024-07-15 20:38:23.727530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.329 [2024-07-15 20:38:23.727538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.329 [2024-07-15 20:38:23.727549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.329 [2024-07-15 20:38:23.727557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.329 [2024-07-15 20:38:23.727566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:49.329 [2024-07-15 20:38:23.727574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.329 [2024-07-15 20:38:23.727582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e5b20 is same with the state(5) to be set 00:12:49.329 [2024-07-15 20:38:23.727635] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10e5b20 was disconnected and freed. reset controller. 
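Everything in the dump above is the intended outcome of target/host_management.sh@84: revoking host0's access while bdevperf is mid-verify forces the target to drop the admin and I/O qpairs, every in-flight READ completes as ABORTED - SQ DELETION, qpair 0x10e5b20 is disconnected and freed, and the initiator responds by resetting the controller. The fault injection itself is just the remove/re-add pair the script issues:

# revoke access mid-I/O, then restore it so the controller reset can succeed
rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1   # give the reset/reconnect loop time to run (host_management.sh@87)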
00:12:49.329 [2024-07-15 20:38:23.728558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:12:49.329 task offset: 90112 on job bdev=Nvme0n1 fails
00:12:49.329
00:12:49.329 Latency(us)
00:12:49.329 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:49.329 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:12:49.329 Job: Nvme0n1 ended in about 0.45 seconds with error
00:12:49.329 Verification LBA range: start 0x0 length 0x400
00:12:49.329 Nvme0n1 : 0.45 1566.77 97.92 142.43 0.00 36543.93 8149.26 31457.28
00:12:49.329 ===================================================================================================================
00:12:49.329 Total : 1566.77 97.92 142.43 0.00 36543.93 8149.26 31457.28
00:12:49.329 [2024-07-15 20:38:23.730141] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:12:49.329 [2024-07-15 20:38:23.730157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcd4980 (9): Bad file descriptor
00:12:49.329 20:38:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:49.329 20:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:12:49.329 20:38:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:49.329 20:38:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:12:49.329 [2024-07-15 20:38:23.738152] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:12:49.329 20:38:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:49.329 20:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:12:50.264 20:38:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2628119
00:12:50.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2628119) - No such process
00:12:50.264 20:38:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true
00:12:50.264 20:38:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:12:50.522 20:38:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:12:50.522 20:38:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:12:50.522 20:38:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=()
00:12:50.522 20:38:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config
00:12:50.522 20:38:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:12:50.522 20:38:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:12:50.522 {
00:12:50.522 "params": {
00:12:50.522 "name": "Nvme$subsystem",
00:12:50.522 "trtype": "$TEST_TRANSPORT",
00:12:50.522 "traddr": "$NVMF_FIRST_TARGET_IP",
00:12:50.522 "adrfam": "ipv4",
00:12:50.522 "trsvcid": "$NVMF_PORT",
00:12:50.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:12:50.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:12:50.522 "hdgst": ${hdgst:-false},
00:12:50.522 "ddgst": ${ddgst:-false}
00:12:50.522 },
00:12:50.522 "method": "bdev_nvme_attach_controller"
00:12:50.522 }
00:12:50.522 EOF
00:12:50.522 )")
00:12:50.522 20:38:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat
00:12:50.522 20:38:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq .
00:12:50.522 20:38:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=,
00:12:50.522 20:38:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:12:50.522 "params": {
00:12:50.522 "name": "Nvme0",
00:12:50.523 "trtype": "tcp",
00:12:50.523 "traddr": "10.0.0.2",
00:12:50.523 "adrfam": "ipv4",
00:12:50.523 "trsvcid": "4420",
00:12:50.523 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:12:50.523 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:12:50.523 "hdgst": false,
00:12:50.523 "ddgst": false
00:12:50.523 },
00:12:50.523 "method": "bdev_nvme_attach_controller"
00:12:50.523 }'
00:12:50.523 [2024-07-15 20:38:24.789989] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization...
00:12:50.523 [2024-07-15 20:38:24.790039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2628583 ]
00:12:50.523 EAL: No free 2048 kB hugepages reported on node 1
00:12:50.523 [2024-07-15 20:38:24.842552] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:50.523 [2024-07-15 20:38:24.913765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:12:50.781 Running I/O for 1 seconds...
00:12:52.156
00:12:52.156 Latency(us)
00:12:52.156 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:52.156 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:12:52.156 Verification LBA range: start 0x0 length 0x400
00:12:52.156 Nvme0n1 : 1.02 1688.91 105.56 0.00 0.00 37337.25 9630.94 32141.13
00:12:52.157 ===================================================================================================================
00:12:52.157 Total : 1688.91 105.56 0.00 0.00 37337.25 9630.94 32141.13
00:12:52.157 20:38:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:12:52.157 20:38:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:12:52.157 20:38:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:12:52.157 20:38:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:12:52.157 20:38:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:12:52.157 20:38:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
00:12:52.157 20:38:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync
00:12:52.157 20:38:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:12:52.157 20:38:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
00:12:52.157 20:38:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
00:12:52.157 20:38:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:12:52.157 rmmod nvme_tcp
00:12:52.157 rmmod nvme_fabrics
00:12:52.157 rmmod nvme_keyring
00:12:52.157 20:38:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:12:52.157 20:38:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e
00:12:52.157 20:38:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0
00:12:52.157 20:38:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2627974 ']'
00:12:52.157 20:38:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2627974
00:12:52.157 20:38:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 2627974 ']'
00:12:52.157 20:38:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 2627974
00:12:52.157 20:38:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname
00:12:52.157 20:38:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:12:52.157 20:38:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2627974
00:12:52.157 20:38:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:12:52.157 20:38:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:12:52.157 20:38:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2627974'
00:12:52.157 killing process with pid 2627974
00:12:52.157 20:38:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 2627974
00:12:52.157 20:38:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 2627974
00:12:52.415 [2024-07-15 20:38:26.706095] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:12:52.415 20:38:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:12:52.415 20:38:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:12:52.415 20:38:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:12:52.415 20:38:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:12:52.415 20:38:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns
00:12:52.415 20:38:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:52.415 20:38:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:12:52.415 20:38:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:54.345 20:38:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:12:54.345 20:38:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:12:54.345
00:12:54.345 real 0m12.268s
00:12:54.345 user 0m23.002s
00:12:54.345 sys 0m4.911s
00:12:54.345 20:38:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable
00:12:54.345 20:38:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:12:54.346 ************************************
00:12:54.346 END TEST nvmf_host_management
00:12:54.346 ************************************
00:12:54.604 20:38:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:12:54.604 20:38:28 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
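The lvol suite launched on the line above re-provisions the target from scratch; before it does, it is worth unpacking the bdevperf invocation the host_management run just traced. gen_nvmf_target_json renders the heredoc shown earlier into a config that bdevperf reads from /dev/fd/62. Below is a hedged standalone sketch of the same run: the top-level "subsystems" wrapper is assumed from SPDK's usual --json layout, and the /tmp path and relative bdevperf path are illustrative, while the params block and the command-line flags are copied from the trace above.

# Write the attach-controller config to a file instead of /dev/fd/62.
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same queue depth (-q 64), IO size (-o 65536), workload, and runtime as the traced run.
./build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1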
00:12:54.604 20:38:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:54.604 20:38:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:54.604 20:38:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:54.604 ************************************ 00:12:54.604 START TEST nvmf_lvol 00:12:54.604 ************************************ 00:12:54.604 20:38:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:54.604 * Looking for test storage... 00:12:54.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:54.604 20:38:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:54.604 20:38:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:12:54.604 20:38:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.604 20:38:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.604 20:38:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.604 20:38:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.604 20:38:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.604 20:38:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.604 20:38:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.604 20:38:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.604 20:38:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.604 20:38:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.604 20:38:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:54.604 20:38:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:54.604 20:38:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.604 20:38:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.604 20:38:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:54.604 20:38:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.604 20:38:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:54.604 20:38:28 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.604 20:38:28 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.605 20:38:28 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.605 20:38:28 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:54.605 20:38:28 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.605 20:38:28 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.605 20:38:28 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:12:54.605 20:38:28 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.605 20:38:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:12:54.605 20:38:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:54.605 20:38:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:54.605 20:38:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:54.605 20:38:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.605 20:38:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.605 20:38:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:54.605 20:38:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:54.605 20:38:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:54.605 20:38:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:54.605 20:38:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:54.605 20:38:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:12:54.605 20:38:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:12:54.605 20:38:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:54.605 20:38:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:12:54.605 20:38:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:54.605 20:38:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:54.605 20:38:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 
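The nvmftestinit/prepare_net_devs path entered here first walks the PCI bus for supported NICs, then splits target from initiator with a network namespace. Condensed from the trace that follows (interface names cvl_0_0/cvl_0_1 are this host's two E810 ports and will differ elsewhere), the setup amounts to:

ip -4 addr flush cvl_0_0                                            # start from clean interfaces
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                        # target gets a private namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                  # one-packet reachability check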
00:12:54.605 20:38:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:54.605 20:38:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:54.605 20:38:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.605 20:38:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:54.605 20:38:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.605 20:38:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:54.605 20:38:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:54.605 20:38:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:12:54.605 20:38:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:59.939 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:59.939 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:12:59.939 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:59.939 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:59.939 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:59.939 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:59.939 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:59.939 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:12:59.939 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:59.939 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:12:59.939 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:12:59.939 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:12:59.939 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:12:59.939 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:12:59.939 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:12:59.939 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:59.939 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:59.939 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:59.939 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:59.939 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:59.939 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:59.939 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:59.939 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:59.939 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:59.939 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:59.939 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:59.939 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:59.939 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:59.939 20:38:34 nvmf_tcp.nvmf_lvol 
-- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:59.939 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:59.939 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:59.940 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:59.940 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:59.940 Found net devices under 0000:86:00.0: cvl_0_0 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol 
-- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:59.940 Found net devices under 0000:86:00.1: cvl_0_1 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:59.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:59.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:12:59.940 00:12:59.940 --- 10.0.0.2 ping statistics --- 00:12:59.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.940 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:59.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:59.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:12:59.940 00:12:59.940 --- 10.0.0.1 ping statistics --- 00:12:59.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.940 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2632271 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2632271 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 2632271 ']' 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:59.940 20:38:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:59.940 [2024-07-15 20:38:34.348518] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:12:59.940 [2024-07-15 20:38:34.348564] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.940 EAL: No free 2048 kB hugepages reported on node 1 00:12:59.940 [2024-07-15 20:38:34.406942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:00.199 [2024-07-15 20:38:34.488439] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:00.200 [2024-07-15 20:38:34.488472] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:00.200 [2024-07-15 20:38:34.488478] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:00.200 [2024-07-15 20:38:34.488485] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:00.200 [2024-07-15 20:38:34.488490] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:00.200 [2024-07-15 20:38:34.488531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:00.200 [2024-07-15 20:38:34.488627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:00.200 [2024-07-15 20:38:34.488629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.767 20:38:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:00.767 20:38:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:13:00.767 20:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:00.767 20:38:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:00.767 20:38:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:00.767 20:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:00.767 20:38:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:01.026 [2024-07-15 20:38:35.338241] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:01.026 20:38:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:01.285 20:38:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:01.285 20:38:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:01.285 20:38:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:01.285 20:38:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:01.544 20:38:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:01.802 20:38:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a0466656-880b-4c3f-8f4c-c358a7eb40d8 00:13:01.802 20:38:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a0466656-880b-4c3f-8f4c-c358a7eb40d8 lvol 20 00:13:02.061 20:38:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=901f3e67-806c-475b-901f-4e4fb873b67e 00:13:02.061 20:38:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:02.061 20:38:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 901f3e67-806c-475b-901f-4e4fb873b67e 00:13:02.320 20:38:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
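Stripped of the xtrace noise, the provisioning flow traced above reduces to the RPC sequence below. This is a sketch: the flags are exactly as invoked in the log, but capturing the IDs into shell variables assumes rpc.py's usual behavior of printing the created object's identifier (this run's lvstore came back as a0466656-880b-4c3f-8f4c-c358a7eb40d8 and its lvol as 901f3e67-806c-475b-901f-4e4fb873b67e).

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192                    # TCP transport for the target
$RPC bdev_malloc_create 64 512                                  # Malloc0: 64 MiB, 512 B blocks
$RPC bdev_malloc_create 64 512                                  # Malloc1
$RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'  # stripe both into raid0
lvs=$($RPC bdev_lvol_create_lvstore raid0 lvs)                  # lvstore on top of the raid
lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 20)                 # 20 MiB logical volume
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"   # expose the lvol as a namespace
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420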
00:13:02.578 [2024-07-15 20:38:36.820893] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:02.578 20:38:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:02.578 20:38:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2632734 00:13:02.578 20:38:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:02.578 20:38:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:02.837 EAL: No free 2048 kB hugepages reported on node 1 00:13:03.773 20:38:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 901f3e67-806c-475b-901f-4e4fb873b67e MY_SNAPSHOT 00:13:04.031 20:38:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=a911004d-1f91-4816-937b-2dc441c68e0c 00:13:04.031 20:38:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 901f3e67-806c-475b-901f-4e4fb873b67e 30 00:13:04.031 20:38:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone a911004d-1f91-4816-937b-2dc441c68e0c MY_CLONE 00:13:04.290 20:38:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=1d7f3ea1-47bb-46d9-8fdb-1919e049040d 00:13:04.290 20:38:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 1d7f3ea1-47bb-46d9-8fdb-1919e049040d 00:13:04.879 20:38:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2632734 00:13:12.994 Initializing NVMe Controllers 00:13:12.994 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:12.994 Controller IO queue size 128, less than required. 00:13:12.994 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:12.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:12.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:12.994 Initialization complete. Launching workers. 
00:13:12.994 ========================================================
00:13:12.994 Latency(us)
00:13:12.994 Device Information : IOPS MiB/s Average min max
00:13:12.994 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12477.80 48.74 10259.46 1551.07 49451.41
00:13:12.994 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12358.40 48.27 10358.02 3462.46 61190.23
00:13:12.994 ========================================================
00:13:12.994 Total : 24836.20 97.02 10308.50 1551.07 61190.23
00:13:12.994
00:13:12.994 20:38:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:13:13.252 20:38:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 901f3e67-806c-475b-901f-4e4fb873b67e
00:13:13.252 20:38:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a0466656-880b-4c3f-8f4c-c358a7eb40d8
00:13:13.510 20:38:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:13:13.510 20:38:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:13:13.510 20:38:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:13:13.510 20:38:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup
00:13:13.510 20:38:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync
00:13:13.510 20:38:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:13:13.510 20:38:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e
00:13:13.510 20:38:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20}
00:13:13.510 20:38:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:13:13.510 rmmod nvme_tcp
00:13:13.510 rmmod nvme_fabrics
00:13:13.510 rmmod nvme_keyring
00:13:13.510 20:38:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:13:13.510 20:38:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e
00:13:13.510 20:38:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0
00:13:13.510 20:38:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2632271 ']'
00:13:13.510 20:38:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2632271
00:13:13.510 20:38:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 2632271 ']'
00:13:13.510 20:38:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 2632271
00:13:13.510 20:38:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname
00:13:13.510 20:38:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:13:13.510 20:38:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2632271
00:13:13.510 20:38:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:13:13.510 20:38:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:13:13.510 20:38:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2632271'
00:13:13.510 killing process with pid 2632271
00:13:13.510 20:38:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 2632271
00:13:13.510 20:38:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 2632271
00:13:13.769 20:38:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:13:13.769
20:38:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:13.769 20:38:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:13.769 20:38:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:13.769 20:38:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:13.769 20:38:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.769 20:38:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:13.769 20:38:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:16.302 00:13:16.302 real 0m21.385s 00:13:16.302 user 1m3.411s 00:13:16.302 sys 0m6.765s 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:16.302 ************************************ 00:13:16.302 END TEST nvmf_lvol 00:13:16.302 ************************************ 00:13:16.302 20:38:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:16.302 20:38:50 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:16.302 20:38:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:16.302 20:38:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:16.302 20:38:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:16.302 ************************************ 00:13:16.302 START TEST nvmf_lvs_grow 00:13:16.302 ************************************ 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:16.302 * Looking for test storage... 
00:13:16.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:13:16.302 20:38:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:21.568 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:21.568 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:21.568 Found net devices under 0000:86:00.0: cvl_0_0 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:21.568 Found net devices under 0000:86:00.1: cvl_0_1 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:21.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:21.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:13:21.568 00:13:21.568 --- 10.0.0.2 ping statistics --- 00:13:21.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.568 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:21.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:21.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:13:21.568 00:13:21.568 --- 10.0.0.1 ping statistics --- 00:13:21.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.568 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2637966 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2637966 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 2637966 ']' 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:21.568 20:38:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.569 20:38:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:21.569 20:38:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:21.569 [2024-07-15 20:38:55.716740] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:13:21.569 [2024-07-15 20:38:55.716785] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:21.569 EAL: No free 2048 kB hugepages reported on node 1 00:13:21.569 [2024-07-15 20:38:55.774147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.569 [2024-07-15 20:38:55.853680] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:21.569 [2024-07-15 20:38:55.853714] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:21.569 [2024-07-15 20:38:55.853721] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:21.569 [2024-07-15 20:38:55.853727] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:21.569 [2024-07-15 20:38:55.853733] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:21.569 [2024-07-15 20:38:55.853751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.135 20:38:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:22.135 20:38:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:13:22.135 20:38:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:22.135 20:38:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:22.135 20:38:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:22.135 20:38:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:22.135 20:38:56 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:22.394 [2024-07-15 20:38:56.717527] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:22.394 20:38:56 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:13:22.394 20:38:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:22.394 20:38:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:22.394 20:38:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:22.394 ************************************ 00:13:22.394 START TEST lvs_grow_clean 00:13:22.394 ************************************ 00:13:22.394 20:38:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:13:22.394 20:38:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:22.394 20:38:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:22.394 20:38:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:22.394 20:38:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:22.394 20:38:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:22.394 20:38:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:22.394 20:38:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:22.394 20:38:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:22.394 20:38:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:22.651 20:38:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:13:22.652 20:38:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:22.910 20:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=741a62b0-511b-4ae8-bdfb-ab01d31a1767 00:13:22.910 20:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 741a62b0-511b-4ae8-bdfb-ab01d31a1767 00:13:22.910 20:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:22.910 20:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:22.910 20:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:22.910 20:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 741a62b0-511b-4ae8-bdfb-ab01d31a1767 lvol 150 00:13:23.177 20:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=1d51030c-18d5-410e-8e66-a4a39bb8a8af 00:13:23.177 20:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:23.177 20:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:23.177 [2024-07-15 20:38:57.632856] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:23.177 [2024-07-15 20:38:57.632904] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:23.177 true 00:13:23.177 20:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 741a62b0-511b-4ae8-bdfb-ab01d31a1767 00:13:23.177 20:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:23.468 20:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:23.468 20:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:23.726 20:38:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1d51030c-18d5-410e-8e66-a4a39bb8a8af 00:13:23.726 20:38:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:23.983 [2024-07-15 20:38:58.350991] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.983 20:38:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:24.241 20:38:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2638470 00:13:24.241 20:38:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:24.241 20:38:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:24.241 20:38:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2638470 /var/tmp/bdevperf.sock 00:13:24.241 20:38:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 2638470 ']' 00:13:24.241 20:38:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:24.241 20:38:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:24.241 20:38:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:24.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:24.241 20:38:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:24.241 20:38:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:24.241 [2024-07-15 20:38:58.582408] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:13:24.241 [2024-07-15 20:38:58.582458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2638470 ] 00:13:24.241 EAL: No free 2048 kB hugepages reported on node 1 00:13:24.241 [2024-07-15 20:38:58.635705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.241 [2024-07-15 20:38:58.707717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:25.174 20:38:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:25.174 20:38:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:13:25.174 20:38:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:25.432 Nvme0n1 00:13:25.432 20:38:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:25.690 [ 00:13:25.690 { 00:13:25.690 "name": "Nvme0n1", 00:13:25.690 "aliases": [ 00:13:25.690 "1d51030c-18d5-410e-8e66-a4a39bb8a8af" 00:13:25.690 ], 00:13:25.690 "product_name": "NVMe disk", 00:13:25.690 "block_size": 4096, 00:13:25.690 "num_blocks": 38912, 00:13:25.690 "uuid": "1d51030c-18d5-410e-8e66-a4a39bb8a8af", 00:13:25.690 "assigned_rate_limits": { 00:13:25.690 "rw_ios_per_sec": 0, 00:13:25.690 "rw_mbytes_per_sec": 0, 00:13:25.690 "r_mbytes_per_sec": 0, 00:13:25.690 "w_mbytes_per_sec": 0 00:13:25.690 }, 00:13:25.690 "claimed": false, 00:13:25.690 "zoned": false, 00:13:25.690 "supported_io_types": { 00:13:25.690 "read": true, 00:13:25.690 "write": true, 00:13:25.690 "unmap": true, 00:13:25.690 "flush": true, 00:13:25.690 "reset": true, 00:13:25.690 "nvme_admin": true, 00:13:25.690 "nvme_io": true, 00:13:25.690 "nvme_io_md": false, 00:13:25.690 "write_zeroes": true, 00:13:25.690 "zcopy": false, 00:13:25.690 "get_zone_info": false, 00:13:25.690 "zone_management": false, 00:13:25.690 "zone_append": false, 00:13:25.690 "compare": true, 00:13:25.690 "compare_and_write": true, 00:13:25.690 "abort": true, 00:13:25.690 "seek_hole": false, 00:13:25.690 "seek_data": false, 00:13:25.690 "copy": true, 00:13:25.690 "nvme_iov_md": false 00:13:25.690 }, 00:13:25.690 "memory_domains": [ 00:13:25.690 { 00:13:25.690 "dma_device_id": "system", 00:13:25.690 "dma_device_type": 1 00:13:25.690 } 00:13:25.690 ], 00:13:25.690 "driver_specific": { 00:13:25.690 "nvme": [ 00:13:25.690 { 00:13:25.690 "trid": { 00:13:25.690 "trtype": "TCP", 00:13:25.690 "adrfam": "IPv4", 00:13:25.690 "traddr": "10.0.0.2", 00:13:25.690 "trsvcid": "4420", 00:13:25.690 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:25.690 }, 00:13:25.690 "ctrlr_data": { 00:13:25.690 "cntlid": 1, 00:13:25.690 "vendor_id": "0x8086", 00:13:25.690 "model_number": "SPDK bdev Controller", 00:13:25.690 "serial_number": "SPDK0", 00:13:25.690 "firmware_revision": "24.09", 00:13:25.690 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:25.690 "oacs": { 00:13:25.690 "security": 0, 00:13:25.690 "format": 0, 00:13:25.690 "firmware": 0, 00:13:25.690 "ns_manage": 0 00:13:25.690 }, 00:13:25.690 "multi_ctrlr": true, 00:13:25.690 "ana_reporting": false 00:13:25.690 }, 
00:13:25.690 "vs": { 00:13:25.690 "nvme_version": "1.3" 00:13:25.690 }, 00:13:25.690 "ns_data": { 00:13:25.690 "id": 1, 00:13:25.690 "can_share": true 00:13:25.690 } 00:13:25.690 } 00:13:25.690 ], 00:13:25.690 "mp_policy": "active_passive" 00:13:25.690 } 00:13:25.690 } 00:13:25.690 ] 00:13:25.690 20:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2638713 00:13:25.690 20:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:25.690 20:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:25.690 Running I/O for 10 seconds... 00:13:26.622 Latency(us) 00:13:26.622 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:26.622 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:26.622 Nvme0n1 : 1.00 21549.00 84.18 0.00 0.00 0.00 0.00 0.00 00:13:26.622 =================================================================================================================== 00:13:26.622 Total : 21549.00 84.18 0.00 0.00 0.00 0.00 0.00 00:13:26.622 00:13:27.555 20:39:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 741a62b0-511b-4ae8-bdfb-ab01d31a1767 00:13:27.813 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:27.813 Nvme0n1 : 2.00 21766.50 85.03 0.00 0.00 0.00 0.00 0.00 00:13:27.813 =================================================================================================================== 00:13:27.813 Total : 21766.50 85.03 0.00 0.00 0.00 0.00 0.00 00:13:27.813 00:13:27.813 true 00:13:27.813 20:39:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 741a62b0-511b-4ae8-bdfb-ab01d31a1767 00:13:27.813 20:39:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:28.071 20:39:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:28.071 20:39:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:28.071 20:39:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2638713 00:13:28.637 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:28.637 Nvme0n1 : 3.00 21844.33 85.33 0.00 0.00 0.00 0.00 0.00 00:13:28.637 =================================================================================================================== 00:13:28.637 Total : 21844.33 85.33 0.00 0.00 0.00 0.00 0.00 00:13:28.637 00:13:30.010 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:30.010 Nvme0n1 : 4.00 21909.25 85.58 0.00 0.00 0.00 0.00 0.00 00:13:30.010 =================================================================================================================== 00:13:30.010 Total : 21909.25 85.58 0.00 0.00 0.00 0.00 0.00 00:13:30.010 00:13:30.946 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:30.946 Nvme0n1 : 5.00 21945.00 85.72 0.00 0.00 0.00 0.00 0.00 00:13:30.946 =================================================================================================================== 00:13:30.946 
Total : 21945.00 85.72 0.00 0.00 0.00 0.00 0.00 00:13:30.946 00:13:31.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:31.879 Nvme0n1 : 6.00 21978.17 85.85 0.00 0.00 0.00 0.00 0.00 00:13:31.879 =================================================================================================================== 00:13:31.879 Total : 21978.17 85.85 0.00 0.00 0.00 0.00 0.00 00:13:31.879 00:13:32.813 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:32.813 Nvme0n1 : 7.00 22003.00 85.95 0.00 0.00 0.00 0.00 0.00 00:13:32.813 =================================================================================================================== 00:13:32.813 Total : 22003.00 85.95 0.00 0.00 0.00 0.00 0.00 00:13:32.813 00:13:33.753 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:33.753 Nvme0n1 : 8.00 22023.62 86.03 0.00 0.00 0.00 0.00 0.00 00:13:33.753 =================================================================================================================== 00:13:33.753 Total : 22023.62 86.03 0.00 0.00 0.00 0.00 0.00 00:13:33.753 00:13:34.690 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:34.690 Nvme0n1 : 9.00 22075.22 86.23 0.00 0.00 0.00 0.00 0.00 00:13:34.690 =================================================================================================================== 00:13:34.690 Total : 22075.22 86.23 0.00 0.00 0.00 0.00 0.00 00:13:34.690 00:13:36.067 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:36.067 Nvme0n1 : 10.00 22099.70 86.33 0.00 0.00 0.00 0.00 0.00 00:13:36.067 =================================================================================================================== 00:13:36.067 Total : 22099.70 86.33 0.00 0.00 0.00 0.00 0.00 00:13:36.067 00:13:36.067 00:13:36.067 Latency(us) 00:13:36.067 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:36.067 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:36.067 Nvme0n1 : 10.01 22099.52 86.33 0.00 0.00 5787.84 4331.07 14417.92 00:13:36.067 =================================================================================================================== 00:13:36.067 Total : 22099.52 86.33 0.00 0.00 5787.84 4331.07 14417.92 00:13:36.067 0 00:13:36.067 20:39:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2638470 00:13:36.067 20:39:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 2638470 ']' 00:13:36.067 20:39:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 2638470 00:13:36.067 20:39:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:13:36.067 20:39:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:36.067 20:39:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2638470 00:13:36.067 20:39:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:36.067 20:39:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:36.067 20:39:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2638470' 00:13:36.067 killing process with pid 2638470 00:13:36.067 20:39:10 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 2638470 00:13:36.067 Received shutdown signal, test time was about 10.000000 seconds 00:13:36.067 00:13:36.067 Latency(us) 00:13:36.067 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:36.067 =================================================================================================================== 00:13:36.067 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:36.067 20:39:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 2638470 00:13:36.067 20:39:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:36.067 20:39:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:36.326 20:39:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 741a62b0-511b-4ae8-bdfb-ab01d31a1767 00:13:36.326 20:39:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:36.584 20:39:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:36.584 20:39:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:13:36.584 20:39:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:36.584 [2024-07-15 20:39:11.051702] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:36.844 20:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 741a62b0-511b-4ae8-bdfb-ab01d31a1767 00:13:36.844 20:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:13:36.844 20:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 741a62b0-511b-4ae8-bdfb-ab01d31a1767 00:13:36.844 20:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:36.844 20:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:36.844 20:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:36.844 20:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:36.844 20:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:36.844 20:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:36.844 20:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:36.844 20:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:36.844 20:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 741a62b0-511b-4ae8-bdfb-ab01d31a1767 00:13:36.844 request: 00:13:36.844 { 00:13:36.844 "uuid": "741a62b0-511b-4ae8-bdfb-ab01d31a1767", 00:13:36.844 "method": "bdev_lvol_get_lvstores", 00:13:36.844 "req_id": 1 00:13:36.844 } 00:13:36.844 Got JSON-RPC error response 00:13:36.844 response: 00:13:36.844 { 00:13:36.844 "code": -19, 00:13:36.844 "message": "No such device" 00:13:36.844 } 00:13:36.844 20:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:13:36.844 20:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:36.844 20:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:36.844 20:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:36.844 20:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:37.103 aio_bdev 00:13:37.103 20:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1d51030c-18d5-410e-8e66-a4a39bb8a8af 00:13:37.103 20:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=1d51030c-18d5-410e-8e66-a4a39bb8a8af 00:13:37.103 20:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:37.103 20:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:13:37.103 20:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:37.103 20:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:37.103 20:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:37.362 20:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1d51030c-18d5-410e-8e66-a4a39bb8a8af -t 2000 00:13:37.362 [ 00:13:37.362 { 00:13:37.362 "name": "1d51030c-18d5-410e-8e66-a4a39bb8a8af", 00:13:37.362 "aliases": [ 00:13:37.362 "lvs/lvol" 00:13:37.362 ], 00:13:37.362 "product_name": "Logical Volume", 00:13:37.362 "block_size": 4096, 00:13:37.362 "num_blocks": 38912, 00:13:37.362 "uuid": "1d51030c-18d5-410e-8e66-a4a39bb8a8af", 00:13:37.362 "assigned_rate_limits": { 00:13:37.362 "rw_ios_per_sec": 0, 00:13:37.362 "rw_mbytes_per_sec": 0, 00:13:37.362 "r_mbytes_per_sec": 0, 00:13:37.362 "w_mbytes_per_sec": 0 00:13:37.362 }, 00:13:37.362 "claimed": false, 00:13:37.362 "zoned": false, 00:13:37.362 "supported_io_types": { 00:13:37.362 "read": true, 00:13:37.362 "write": true, 00:13:37.362 "unmap": true, 00:13:37.362 "flush": false, 00:13:37.362 "reset": true, 00:13:37.362 "nvme_admin": false, 00:13:37.362 "nvme_io": false, 00:13:37.362 
"nvme_io_md": false, 00:13:37.362 "write_zeroes": true, 00:13:37.362 "zcopy": false, 00:13:37.362 "get_zone_info": false, 00:13:37.362 "zone_management": false, 00:13:37.362 "zone_append": false, 00:13:37.362 "compare": false, 00:13:37.362 "compare_and_write": false, 00:13:37.362 "abort": false, 00:13:37.362 "seek_hole": true, 00:13:37.362 "seek_data": true, 00:13:37.363 "copy": false, 00:13:37.363 "nvme_iov_md": false 00:13:37.363 }, 00:13:37.363 "driver_specific": { 00:13:37.363 "lvol": { 00:13:37.363 "lvol_store_uuid": "741a62b0-511b-4ae8-bdfb-ab01d31a1767", 00:13:37.363 "base_bdev": "aio_bdev", 00:13:37.363 "thin_provision": false, 00:13:37.363 "num_allocated_clusters": 38, 00:13:37.363 "snapshot": false, 00:13:37.363 "clone": false, 00:13:37.363 "esnap_clone": false 00:13:37.363 } 00:13:37.363 } 00:13:37.363 } 00:13:37.363 ] 00:13:37.363 20:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:13:37.363 20:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 741a62b0-511b-4ae8-bdfb-ab01d31a1767 00:13:37.363 20:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:37.622 20:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:37.622 20:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 741a62b0-511b-4ae8-bdfb-ab01d31a1767 00:13:37.622 20:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:37.881 20:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:37.881 20:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1d51030c-18d5-410e-8e66-a4a39bb8a8af 00:13:37.881 20:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 741a62b0-511b-4ae8-bdfb-ab01d31a1767 00:13:38.139 20:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:38.398 20:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:38.398 00:13:38.398 real 0m15.886s 00:13:38.398 user 0m15.544s 00:13:38.398 sys 0m1.476s 00:13:38.398 20:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:38.398 20:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:38.398 ************************************ 00:13:38.398 END TEST lvs_grow_clean 00:13:38.398 ************************************ 00:13:38.398 20:39:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:13:38.398 20:39:12 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:13:38.398 20:39:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:38.398 20:39:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:13:38.398 20:39:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:38.398 ************************************ 00:13:38.398 START TEST lvs_grow_dirty 00:13:38.398 ************************************ 00:13:38.398 20:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:13:38.398 20:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:38.398 20:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:38.398 20:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:38.398 20:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:38.398 20:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:38.398 20:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:38.398 20:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:38.398 20:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:38.398 20:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:38.657 20:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:38.657 20:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:38.657 20:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=e18a761b-fb9b-4b6d-9229-0468f1abf23c 00:13:38.657 20:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e18a761b-fb9b-4b6d-9229-0468f1abf23c 00:13:38.657 20:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:38.918 20:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:38.918 20:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:38.918 20:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e18a761b-fb9b-4b6d-9229-0468f1abf23c lvol 150 00:13:39.178 20:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=7379b08c-52ad-4baa-94b7-eaaf87c3bf6a 00:13:39.178 20:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:39.178 20:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:39.178 
[2024-07-15 20:39:13.612916] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:39.178 [2024-07-15 20:39:13.612965] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:39.178 true 00:13:39.178 20:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e18a761b-fb9b-4b6d-9229-0468f1abf23c 00:13:39.178 20:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:39.453 20:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:39.453 20:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:39.721 20:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7379b08c-52ad-4baa-94b7-eaaf87c3bf6a 00:13:39.721 20:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:39.980 [2024-07-15 20:39:14.270866] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:39.980 20:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:39.980 20:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:39.980 20:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2641587 00:13:39.980 20:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:39.980 20:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2641587 /var/tmp/bdevperf.sock 00:13:39.980 20:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2641587 ']' 00:13:39.980 20:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:39.980 20:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:39.980 20:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:39.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
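As in the clean pass, the lvol is then exported over NVMe/TCP and consumed by bdevperf as a remote namespace. The wiring, reduced to the RPCs the trace shows ($rpc and <lvol_uuid> are shorthand, as above):

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol_uuid>
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # initiator side, against bdevperf's own RPC socket; the namespace surfaces as Nvme0n1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0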
00:13:39.980 20:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:39.980 20:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:40.239 [2024-07-15 20:39:14.502469] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:13:40.239 [2024-07-15 20:39:14.502519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2641587 ] 00:13:40.239 EAL: No free 2048 kB hugepages reported on node 1 00:13:40.239 [2024-07-15 20:39:14.557418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.239 [2024-07-15 20:39:14.637458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.173 20:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:41.174 20:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:13:41.174 20:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:41.174 Nvme0n1 00:13:41.174 20:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:41.432 [ 00:13:41.432 { 00:13:41.432 "name": "Nvme0n1", 00:13:41.432 "aliases": [ 00:13:41.432 "7379b08c-52ad-4baa-94b7-eaaf87c3bf6a" 00:13:41.432 ], 00:13:41.432 "product_name": "NVMe disk", 00:13:41.432 "block_size": 4096, 00:13:41.432 "num_blocks": 38912, 00:13:41.432 "uuid": "7379b08c-52ad-4baa-94b7-eaaf87c3bf6a", 00:13:41.432 "assigned_rate_limits": { 00:13:41.432 "rw_ios_per_sec": 0, 00:13:41.432 "rw_mbytes_per_sec": 0, 00:13:41.432 "r_mbytes_per_sec": 0, 00:13:41.432 "w_mbytes_per_sec": 0 00:13:41.432 }, 00:13:41.432 "claimed": false, 00:13:41.432 "zoned": false, 00:13:41.432 "supported_io_types": { 00:13:41.432 "read": true, 00:13:41.432 "write": true, 00:13:41.432 "unmap": true, 00:13:41.432 "flush": true, 00:13:41.432 "reset": true, 00:13:41.432 "nvme_admin": true, 00:13:41.432 "nvme_io": true, 00:13:41.432 "nvme_io_md": false, 00:13:41.432 "write_zeroes": true, 00:13:41.432 "zcopy": false, 00:13:41.432 "get_zone_info": false, 00:13:41.432 "zone_management": false, 00:13:41.432 "zone_append": false, 00:13:41.432 "compare": true, 00:13:41.432 "compare_and_write": true, 00:13:41.432 "abort": true, 00:13:41.432 "seek_hole": false, 00:13:41.432 "seek_data": false, 00:13:41.432 "copy": true, 00:13:41.432 "nvme_iov_md": false 00:13:41.432 }, 00:13:41.432 "memory_domains": [ 00:13:41.432 { 00:13:41.432 "dma_device_id": "system", 00:13:41.432 "dma_device_type": 1 00:13:41.432 } 00:13:41.432 ], 00:13:41.432 "driver_specific": { 00:13:41.432 "nvme": [ 00:13:41.432 { 00:13:41.432 "trid": { 00:13:41.432 "trtype": "TCP", 00:13:41.432 "adrfam": "IPv4", 00:13:41.432 "traddr": "10.0.0.2", 00:13:41.432 "trsvcid": "4420", 00:13:41.432 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:41.432 }, 00:13:41.432 "ctrlr_data": { 00:13:41.432 "cntlid": 1, 00:13:41.432 "vendor_id": "0x8086", 00:13:41.432 "model_number": "SPDK bdev Controller", 00:13:41.432 "serial_number": "SPDK0", 
00:13:41.432 "firmware_revision": "24.09", 00:13:41.432 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:41.432 "oacs": { 00:13:41.432 "security": 0, 00:13:41.432 "format": 0, 00:13:41.432 "firmware": 0, 00:13:41.432 "ns_manage": 0 00:13:41.432 }, 00:13:41.432 "multi_ctrlr": true, 00:13:41.432 "ana_reporting": false 00:13:41.432 }, 00:13:41.432 "vs": { 00:13:41.432 "nvme_version": "1.3" 00:13:41.432 }, 00:13:41.432 "ns_data": { 00:13:41.432 "id": 1, 00:13:41.432 "can_share": true 00:13:41.432 } 00:13:41.432 } 00:13:41.432 ], 00:13:41.432 "mp_policy": "active_passive" 00:13:41.432 } 00:13:41.432 } 00:13:41.432 ] 00:13:41.432 20:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2641821 00:13:41.432 20:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:41.432 20:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:41.432 Running I/O for 10 seconds... 00:13:42.372 Latency(us) 00:13:42.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.372 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:42.372 Nvme0n1 : 1.00 22078.00 86.24 0.00 0.00 0.00 0.00 0.00 00:13:42.372 =================================================================================================================== 00:13:42.372 Total : 22078.00 86.24 0.00 0.00 0.00 0.00 0.00 00:13:42.372 00:13:43.305 20:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e18a761b-fb9b-4b6d-9229-0468f1abf23c 00:13:43.562 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:43.562 Nvme0n1 : 2.00 22247.00 86.90 0.00 0.00 0.00 0.00 0.00 00:13:43.562 =================================================================================================================== 00:13:43.562 Total : 22247.00 86.90 0.00 0.00 0.00 0.00 0.00 00:13:43.562 00:13:43.562 true 00:13:43.562 20:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e18a761b-fb9b-4b6d-9229-0468f1abf23c 00:13:43.562 20:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:43.820 20:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:43.820 20:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:43.820 20:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2641821 00:13:44.386 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:44.386 Nvme0n1 : 3.00 22306.00 87.13 0.00 0.00 0.00 0.00 0.00 00:13:44.386 =================================================================================================================== 00:13:44.386 Total : 22306.00 87.13 0.00 0.00 0.00 0.00 0.00 00:13:44.386 00:13:45.762 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:45.762 Nvme0n1 : 4.00 22367.50 87.37 0.00 0.00 0.00 0.00 0.00 00:13:45.762 =================================================================================================================== 00:13:45.762 Total : 22367.50 87.37 0.00 
0.00 0.00 0.00 0.00 00:13:45.762 00:13:46.697 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:46.697 Nvme0n1 : 5.00 22335.60 87.25 0.00 0.00 0.00 0.00 0.00 00:13:46.697 =================================================================================================================== 00:13:46.697 Total : 22335.60 87.25 0.00 0.00 0.00 0.00 0.00 00:13:46.697 00:13:47.633 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:47.633 Nvme0n1 : 6.00 22382.33 87.43 0.00 0.00 0.00 0.00 0.00 00:13:47.633 =================================================================================================================== 00:13:47.633 Total : 22382.33 87.43 0.00 0.00 0.00 0.00 0.00 00:13:47.633 00:13:48.568 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:48.568 Nvme0n1 : 7.00 22420.29 87.58 0.00 0.00 0.00 0.00 0.00 00:13:48.568 =================================================================================================================== 00:13:48.568 Total : 22420.29 87.58 0.00 0.00 0.00 0.00 0.00 00:13:48.568 00:13:49.503 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:49.503 Nvme0n1 : 8.00 22448.75 87.69 0.00 0.00 0.00 0.00 0.00 00:13:49.503 =================================================================================================================== 00:13:49.503 Total : 22448.75 87.69 0.00 0.00 0.00 0.00 0.00 00:13:49.503 00:13:50.436 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:50.436 Nvme0n1 : 9.00 22473.56 87.79 0.00 0.00 0.00 0.00 0.00 00:13:50.436 =================================================================================================================== 00:13:50.436 Total : 22473.56 87.79 0.00 0.00 0.00 0.00 0.00 00:13:50.436 00:13:51.370 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:51.371 Nvme0n1 : 10.00 22494.20 87.87 0.00 0.00 0.00 0.00 0.00 00:13:51.371 =================================================================================================================== 00:13:51.371 Total : 22494.20 87.87 0.00 0.00 0.00 0.00 0.00 00:13:51.371 00:13:51.371 00:13:51.371 Latency(us) 00:13:51.371 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:51.371 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:51.371 Nvme0n1 : 10.01 22494.17 87.87 0.00 0.00 5686.29 4188.61 15386.71 00:13:51.371 =================================================================================================================== 00:13:51.371 Total : 22494.17 87.87 0.00 0.00 5686.29 4188.61 15386.71 00:13:51.371 0 00:13:51.630 20:39:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2641587 00:13:51.630 20:39:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 2641587 ']' 00:13:51.630 20:39:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 2641587 00:13:51.630 20:39:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:13:51.630 20:39:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:51.630 20:39:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2641587 00:13:51.630 20:39:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:51.630 20:39:25 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:51.630 20:39:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2641587' 00:13:51.630 killing process with pid 2641587 00:13:51.630 20:39:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 2641587 00:13:51.630 Received shutdown signal, test time was about 10.000000 seconds 00:13:51.630 00:13:51.630 Latency(us) 00:13:51.630 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:51.630 =================================================================================================================== 00:13:51.630 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:51.630 20:39:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 2641587 00:13:51.630 20:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:51.889 20:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:52.147 20:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e18a761b-fb9b-4b6d-9229-0468f1abf23c 00:13:52.147 20:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:52.405 20:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:52.405 20:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:13:52.405 20:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2637966 00:13:52.405 20:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2637966 00:13:52.405 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2637966 Killed "${NVMF_APP[@]}" "$@" 00:13:52.405 20:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:13:52.405 20:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:13:52.405 20:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:52.405 20:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:52.405 20:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:52.405 20:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2643663 00:13:52.406 20:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2643663 00:13:52.406 20:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2643663 ']' 00:13:52.406 20:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.406 20:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:52.406 20:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:13:52.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.406 20:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:52.406 20:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:52.406 20:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:52.406 [2024-07-15 20:39:26.735301] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:13:52.406 [2024-07-15 20:39:26.735348] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.406 EAL: No free 2048 kB hugepages reported on node 1 00:13:52.406 [2024-07-15 20:39:26.792587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.406 [2024-07-15 20:39:26.871890] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:52.406 [2024-07-15 20:39:26.871925] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:52.406 [2024-07-15 20:39:26.871932] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:52.406 [2024-07-15 20:39:26.871938] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:52.406 [2024-07-15 20:39:26.871943] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
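This restart is the point of the dirty variant: the first nvmf_tgt was killed with SIGKILL while it held the lvstore open, so the blobstore on the aio file was never cleanly unloaded. When the replacement target re-creates the aio bdev below, loading the store triggers recovery (the bs_recover and "Recover: blob 0x0/0x1" notices), after which the free-cluster count is re-checked against the pre-kill state. In outline (variable names are shorthand for the values in the trace):

  kill -9 $nvmfpid                                              # hard stop; lvstore left dirty
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  $rpc bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096  # load runs blobstore recovery
  $rpc bdev_lvol_get_lvstores -u <lvs_uuid> | jq -r '.[0].free_clusters'   # expect 61 again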
00:13:52.406 [2024-07-15 20:39:26.871963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.341 20:39:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:53.341 20:39:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:13:53.341 20:39:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:53.341 20:39:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:53.341 20:39:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:53.341 20:39:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.341 20:39:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:53.341 [2024-07-15 20:39:27.721109] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:13:53.341 [2024-07-15 20:39:27.721190] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:13:53.341 [2024-07-15 20:39:27.721212] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:13:53.341 20:39:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:13:53.341 20:39:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 7379b08c-52ad-4baa-94b7-eaaf87c3bf6a 00:13:53.341 20:39:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=7379b08c-52ad-4baa-94b7-eaaf87c3bf6a 00:13:53.341 20:39:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:53.341 20:39:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:13:53.341 20:39:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:53.341 20:39:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:53.341 20:39:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:53.599 20:39:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7379b08c-52ad-4baa-94b7-eaaf87c3bf6a -t 2000 00:13:53.599 [ 00:13:53.599 { 00:13:53.600 "name": "7379b08c-52ad-4baa-94b7-eaaf87c3bf6a", 00:13:53.600 "aliases": [ 00:13:53.600 "lvs/lvol" 00:13:53.600 ], 00:13:53.600 "product_name": "Logical Volume", 00:13:53.600 "block_size": 4096, 00:13:53.600 "num_blocks": 38912, 00:13:53.600 "uuid": "7379b08c-52ad-4baa-94b7-eaaf87c3bf6a", 00:13:53.600 "assigned_rate_limits": { 00:13:53.600 "rw_ios_per_sec": 0, 00:13:53.600 "rw_mbytes_per_sec": 0, 00:13:53.600 "r_mbytes_per_sec": 0, 00:13:53.600 "w_mbytes_per_sec": 0 00:13:53.600 }, 00:13:53.600 "claimed": false, 00:13:53.600 "zoned": false, 00:13:53.600 "supported_io_types": { 00:13:53.600 "read": true, 00:13:53.600 "write": true, 00:13:53.600 "unmap": true, 00:13:53.600 "flush": false, 00:13:53.600 "reset": true, 00:13:53.600 "nvme_admin": false, 00:13:53.600 "nvme_io": false, 00:13:53.600 "nvme_io_md": 
false, 00:13:53.600 "write_zeroes": true, 00:13:53.600 "zcopy": false, 00:13:53.600 "get_zone_info": false, 00:13:53.600 "zone_management": false, 00:13:53.600 "zone_append": false, 00:13:53.600 "compare": false, 00:13:53.600 "compare_and_write": false, 00:13:53.600 "abort": false, 00:13:53.600 "seek_hole": true, 00:13:53.600 "seek_data": true, 00:13:53.600 "copy": false, 00:13:53.600 "nvme_iov_md": false 00:13:53.600 }, 00:13:53.600 "driver_specific": { 00:13:53.600 "lvol": { 00:13:53.600 "lvol_store_uuid": "e18a761b-fb9b-4b6d-9229-0468f1abf23c", 00:13:53.600 "base_bdev": "aio_bdev", 00:13:53.600 "thin_provision": false, 00:13:53.600 "num_allocated_clusters": 38, 00:13:53.600 "snapshot": false, 00:13:53.600 "clone": false, 00:13:53.600 "esnap_clone": false 00:13:53.600 } 00:13:53.600 } 00:13:53.600 } 00:13:53.600 ] 00:13:53.858 20:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:13:53.858 20:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e18a761b-fb9b-4b6d-9229-0468f1abf23c 00:13:53.858 20:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:13:53.858 20:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:13:53.858 20:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e18a761b-fb9b-4b6d-9229-0468f1abf23c 00:13:53.858 20:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:13:54.116 20:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:13:54.116 20:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:54.375 [2024-07-15 20:39:28.601590] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:54.375 20:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e18a761b-fb9b-4b6d-9229-0468f1abf23c 00:13:54.375 20:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:13:54.375 20:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e18a761b-fb9b-4b6d-9229-0468f1abf23c 00:13:54.375 20:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:54.375 20:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:54.375 20:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:54.375 20:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:54.375 20:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
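The valid_exec_arg plumbing around this point belongs to the test's negative check: aio_bdev has just been deleted out from under the store, so bdev_lvol_get_lvstores must now fail with -19 (No such device), and the NOT wrapper turns that expected failure into a pass. The shape of the assertion, roughly:

  $rpc bdev_aio_delete aio_bdev            # hot-remove closes the lvstore
  if $rpc bdev_lvol_get_lvstores -u <lvs_uuid>; then
      echo "lvstore unexpectedly still present" >&2; exit 1
  fi                                       # expected: JSON-RPC error -19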
00:13:54.375 20:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:54.375 20:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:54.375 20:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:54.375 20:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e18a761b-fb9b-4b6d-9229-0468f1abf23c 00:13:54.375 request: 00:13:54.375 { 00:13:54.375 "uuid": "e18a761b-fb9b-4b6d-9229-0468f1abf23c", 00:13:54.375 "method": "bdev_lvol_get_lvstores", 00:13:54.375 "req_id": 1 00:13:54.375 } 00:13:54.375 Got JSON-RPC error response 00:13:54.375 response: 00:13:54.375 { 00:13:54.375 "code": -19, 00:13:54.375 "message": "No such device" 00:13:54.375 } 00:13:54.375 20:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:13:54.375 20:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:54.375 20:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:54.375 20:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:54.375 20:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:54.634 aio_bdev 00:13:54.634 20:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7379b08c-52ad-4baa-94b7-eaaf87c3bf6a 00:13:54.634 20:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=7379b08c-52ad-4baa-94b7-eaaf87c3bf6a 00:13:54.634 20:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:54.634 20:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:13:54.634 20:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:54.634 20:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:54.634 20:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:54.893 20:39:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7379b08c-52ad-4baa-94b7-eaaf87c3bf6a -t 2000 00:13:54.893 [ 00:13:54.893 { 00:13:54.893 "name": "7379b08c-52ad-4baa-94b7-eaaf87c3bf6a", 00:13:54.893 "aliases": [ 00:13:54.893 "lvs/lvol" 00:13:54.893 ], 00:13:54.893 "product_name": "Logical Volume", 00:13:54.893 "block_size": 4096, 00:13:54.893 "num_blocks": 38912, 00:13:54.893 "uuid": "7379b08c-52ad-4baa-94b7-eaaf87c3bf6a", 00:13:54.893 "assigned_rate_limits": { 00:13:54.893 "rw_ios_per_sec": 0, 00:13:54.893 "rw_mbytes_per_sec": 0, 00:13:54.893 "r_mbytes_per_sec": 0, 00:13:54.893 "w_mbytes_per_sec": 0 00:13:54.893 }, 00:13:54.893 "claimed": false, 00:13:54.893 "zoned": false, 00:13:54.893 "supported_io_types": { 
00:13:54.893 "read": true, 00:13:54.893 "write": true, 00:13:54.893 "unmap": true, 00:13:54.893 "flush": false, 00:13:54.893 "reset": true, 00:13:54.893 "nvme_admin": false, 00:13:54.893 "nvme_io": false, 00:13:54.893 "nvme_io_md": false, 00:13:54.893 "write_zeroes": true, 00:13:54.893 "zcopy": false, 00:13:54.893 "get_zone_info": false, 00:13:54.893 "zone_management": false, 00:13:54.893 "zone_append": false, 00:13:54.893 "compare": false, 00:13:54.893 "compare_and_write": false, 00:13:54.893 "abort": false, 00:13:54.893 "seek_hole": true, 00:13:54.893 "seek_data": true, 00:13:54.893 "copy": false, 00:13:54.893 "nvme_iov_md": false 00:13:54.893 }, 00:13:54.893 "driver_specific": { 00:13:54.893 "lvol": { 00:13:54.893 "lvol_store_uuid": "e18a761b-fb9b-4b6d-9229-0468f1abf23c", 00:13:54.893 "base_bdev": "aio_bdev", 00:13:54.893 "thin_provision": false, 00:13:54.893 "num_allocated_clusters": 38, 00:13:54.893 "snapshot": false, 00:13:54.893 "clone": false, 00:13:54.893 "esnap_clone": false 00:13:54.893 } 00:13:54.893 } 00:13:54.893 } 00:13:54.893 ] 00:13:54.893 20:39:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:13:54.893 20:39:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e18a761b-fb9b-4b6d-9229-0468f1abf23c 00:13:54.893 20:39:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:55.186 20:39:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:55.186 20:39:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e18a761b-fb9b-4b6d-9229-0468f1abf23c 00:13:55.186 20:39:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:55.186 20:39:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:55.186 20:39:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7379b08c-52ad-4baa-94b7-eaaf87c3bf6a 00:13:55.444 20:39:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e18a761b-fb9b-4b6d-9229-0468f1abf23c 00:13:55.703 20:39:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:55.963 20:39:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:55.963 00:13:55.963 real 0m17.524s 00:13:55.963 user 0m44.706s 00:13:55.963 sys 0m4.198s 00:13:55.963 20:39:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:55.963 20:39:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:55.963 ************************************ 00:13:55.963 END TEST lvs_grow_dirty 00:13:55.963 ************************************ 00:13:55.963 20:39:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:13:55.963 20:39:30 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
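Taken together, the RPCs traced since the aio bdev was re-created form the core of the dirty-recovery check that just finished: reload the same backing file, let blobstore recovery replay the metadata, then confirm the lvol and its cluster accounting survived. A condensed sketch with the paths, UUIDs, and expected counts from this run (the harness's waitforbdev helper is approximated here by the -t 2000 timeout):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
AIO=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
"$RPC" bdev_aio_create "$AIO" aio_bdev 4096      # triggers blobstore recovery on load
"$RPC" bdev_wait_for_examine
"$RPC" bdev_get_bdevs -b 7379b08c-52ad-4baa-94b7-eaaf87c3bf6a -t 2000   # lvol must reappear
# the recovered lvstore must report the pre-crash accounting
lvs=$("$RPC" bdev_lvol_get_lvstores -u e18a761b-fb9b-4b6d-9229-0468f1abf23c)
(( $(jq -r '.[0].free_clusters' <<<"$lvs") == 61 ))
(( $(jq -r '.[0].total_data_clusters' <<<"$lvs") == 99 ))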
00:13:55.963 20:39:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:13:55.963 20:39:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:13:55.963 20:39:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:13:55.963 20:39:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:55.963 20:39:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:13:55.963 20:39:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:13:55.963 20:39:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:13:55.963 20:39:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:55.963 nvmf_trace.0 00:13:55.963 20:39:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:13:55.963 20:39:30 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:13:55.963 20:39:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:55.963 20:39:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:13:55.963 20:39:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:55.963 20:39:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:13:55.963 20:39:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:55.963 20:39:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:55.963 rmmod nvme_tcp 00:13:55.963 rmmod nvme_fabrics 00:13:55.963 rmmod nvme_keyring 00:13:55.963 20:39:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:55.963 20:39:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:13:55.963 20:39:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:13:55.963 20:39:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2643663 ']' 00:13:55.963 20:39:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2643663 00:13:55.963 20:39:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 2643663 ']' 00:13:55.963 20:39:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 2643663 00:13:55.963 20:39:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:13:55.963 20:39:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:55.963 20:39:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2643663 00:13:55.963 20:39:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:55.963 20:39:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:55.963 20:39:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2643663' 00:13:55.963 killing process with pid 2643663 00:13:55.963 20:39:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 2643663 00:13:55.963 20:39:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 2643663 00:13:56.223 20:39:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:56.223 20:39:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:56.223 20:39:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:56.223 
20:39:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:56.223 20:39:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:56.223 20:39:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.223 20:39:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:56.223 20:39:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.757 20:39:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:58.757 00:13:58.757 real 0m42.347s 00:13:58.757 user 1m5.887s 00:13:58.757 sys 0m10.111s 00:13:58.757 20:39:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:58.757 20:39:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:58.757 ************************************ 00:13:58.757 END TEST nvmf_lvs_grow 00:13:58.757 ************************************ 00:13:58.757 20:39:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:58.757 20:39:32 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:58.757 20:39:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:58.757 20:39:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:58.757 20:39:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:58.757 ************************************ 00:13:58.757 START TEST nvmf_bdev_io_wait 00:13:58.757 ************************************ 00:13:58.757 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:58.757 * Looking for test storage... 
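Between tests the harness tears down in a fixed order: archive the shared-memory trace file, unload the initiator-side kernel modules, kill the target by PID, and flush the test interfaces. A minimal sketch of that sequence as traced above, where $nvmfpid stands in for the recorded target PID (2643663 in this run):

# archive /dev/shm/nvmf_trace.0 next to the build output for offline analysis
tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
sync
modprobe -v -r nvme-tcp       # rmmod output above shows nvme_tcp, nvme_fabrics, nvme_keyring going away
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"
ip -4 addr flush cvl_0_1      # leave the test interface clean for the next run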
00:13:58.757 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:58.757 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:58.757 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:13:58.757 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:58.757 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:58.757 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:58.757 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:58.757 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:58.757 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:58.757 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:58.757 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:58.757 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:58.757 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:58.757 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:58.757 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:58.757 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:58.757 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:58.757 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:58.757 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:58.757 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:58.757 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:58.757 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:58.757 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:58.757 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.757 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.757 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.757 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:13:58.758 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.758 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:13:58.758 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:58.758 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:58.758 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:58.758 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:58.758 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:58.758 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:58.758 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:58.758 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:58.758 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:58.758 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:58.758 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:13:58.758 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:58.758 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:58.758 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:58.758 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:58.758 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:58.758 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.758 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:58.758 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.758 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:58.758 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:58.758 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:13:58.758 20:39:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:04.023 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:04.023 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:14:04.023 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:04.023 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:04.023 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:04.023 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:04.023 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:04.023 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:14:04.023 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:04.023 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:14:04.023 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:14:04.023 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:14:04.023 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:14:04.023 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:14:04.023 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:14:04.023 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:04.024 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:04.024 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:04.024 Found net devices under 0000:86:00.0: cvl_0_0 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:04.024 Found net devices under 0000:86:00.1: cvl_0_1 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:14:04.024 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:04.051 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:04.051 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:04.051 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:04.051 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:04.051 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:04.051 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:04.051 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:04.051 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:04.051 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:04.051 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:04.051 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:04.051 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:04.051 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:04.051 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:04.051 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:04.051 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:04.051 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:04.051 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:04.051 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:04.051 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:04.051 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:04.051 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:04.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:04.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:14:04.051 00:14:04.051 --- 10.0.0.2 ping statistics --- 00:14:04.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.051 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:14:04.051 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:04.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:04.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:14:04.051 00:14:04.051 --- 10.0.0.1 ping statistics --- 00:14:04.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.051 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:14:04.051 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:04.051 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:14:04.051 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:04.051 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:04.051 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:04.051 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:04.051 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:04.051 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:04.051 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:04.310 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:04.310 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:04.310 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:04.310 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:04.310 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2647931 00:14:04.310 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2647931 00:14:04.310 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:04.310 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 2647931 ']' 00:14:04.310 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.310 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:04.310 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.310 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:04.310 20:39:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:04.310 [2024-07-15 20:39:38.574013] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:14:04.310 [2024-07-15 20:39:38.574055] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.310 EAL: No free 2048 kB hugepages reported on node 1 00:14:04.310 [2024-07-15 20:39:38.630082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:04.310 [2024-07-15 20:39:38.709039] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:04.310 [2024-07-15 20:39:38.709079] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:04.310 [2024-07-15 20:39:38.709086] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:04.310 [2024-07-15 20:39:38.709092] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:04.310 [2024-07-15 20:39:38.709097] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:04.310 [2024-07-15 20:39:38.709143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:04.310 [2024-07-15 20:39:38.709250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:04.310 [2024-07-15 20:39:38.709314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:04.311 [2024-07-15 20:39:38.709316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:05.247 [2024-07-15 20:39:39.479647] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
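With the TCP transport initialized, the setup continues below with a malloc bdev, a subsystem, a namespace, and a listener. A condensed sketch of the full target bring-up as traced in this run (rpc_cmd in the harness resolves to rpc.py against the target's RPC socket):

NS='ip netns exec cvl_0_0_ns_spdk'
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# start the target paused (--wait-for-rpc) so bdev options can be set before framework init
$NS /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
# (the harness waits for the RPC socket here via waitforlisten)
"$RPC" bdev_set_options -p 5 -c 1          # small values; this test targets bdev I/O wait behavior
"$RPC" framework_start_init
"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" bdev_malloc_create 64 512 -b Malloc0
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420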
00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:05.247 Malloc0 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:05.247 [2024-07-15 20:39:39.538862] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2648059 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2648062 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:05.247 { 00:14:05.247 "params": { 00:14:05.247 "name": "Nvme$subsystem", 00:14:05.247 "trtype": "$TEST_TRANSPORT", 00:14:05.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:05.247 "adrfam": "ipv4", 00:14:05.247 "trsvcid": "$NVMF_PORT", 00:14:05.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:05.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:05.247 "hdgst": ${hdgst:-false}, 00:14:05.247 "ddgst": ${ddgst:-false} 00:14:05.247 }, 00:14:05.247 "method": "bdev_nvme_attach_controller" 00:14:05.247 } 00:14:05.247 EOF 00:14:05.247 )") 00:14:05.247 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:05.248 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:05.248 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2648065 00:14:05.248 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:05.248 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:05.248 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:05.248 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:05.248 { 00:14:05.248 "params": { 00:14:05.248 "name": "Nvme$subsystem", 00:14:05.248 "trtype": "$TEST_TRANSPORT", 00:14:05.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:05.248 "adrfam": "ipv4", 00:14:05.248 "trsvcid": "$NVMF_PORT", 00:14:05.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:05.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:05.248 "hdgst": ${hdgst:-false}, 00:14:05.248 "ddgst": ${ddgst:-false} 00:14:05.248 }, 00:14:05.248 "method": "bdev_nvme_attach_controller" 00:14:05.248 } 00:14:05.248 EOF 00:14:05.248 )") 00:14:05.248 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:05.248 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2648069 00:14:05.248 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:05.248 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:05.248 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:14:05.248 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:05.248 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:05.248 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:05.248 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:05.248 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:05.248 { 00:14:05.248 "params": { 00:14:05.248 "name": "Nvme$subsystem", 00:14:05.248 "trtype": "$TEST_TRANSPORT", 00:14:05.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:05.248 "adrfam": "ipv4", 00:14:05.248 "trsvcid": "$NVMF_PORT", 00:14:05.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:05.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:05.248 "hdgst": ${hdgst:-false}, 00:14:05.248 "ddgst": ${ddgst:-false} 00:14:05.248 }, 00:14:05.248 "method": "bdev_nvme_attach_controller" 00:14:05.248 } 00:14:05.248 EOF 00:14:05.248 )") 00:14:05.248 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:05.248 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:05.248 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:05.248 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:05.248 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:05.248 20:39:39 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:05.248 { 00:14:05.248 "params": { 00:14:05.248 "name": "Nvme$subsystem", 00:14:05.248 "trtype": "$TEST_TRANSPORT", 00:14:05.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:05.248 "adrfam": "ipv4", 00:14:05.248 "trsvcid": "$NVMF_PORT", 00:14:05.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:05.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:05.248 "hdgst": ${hdgst:-false}, 00:14:05.248 "ddgst": ${ddgst:-false} 00:14:05.248 }, 00:14:05.248 "method": "bdev_nvme_attach_controller" 00:14:05.248 } 00:14:05.248 EOF 00:14:05.248 )") 00:14:05.248 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:05.248 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2648059 00:14:05.248 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:05.248 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:05.248 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:05.248 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:05.248 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:05.248 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:05.248 "params": { 00:14:05.248 "name": "Nvme1", 00:14:05.248 "trtype": "tcp", 00:14:05.248 "traddr": "10.0.0.2", 00:14:05.248 "adrfam": "ipv4", 00:14:05.248 "trsvcid": "4420", 00:14:05.248 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:05.248 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:05.248 "hdgst": false, 00:14:05.248 "ddgst": false 00:14:05.248 }, 00:14:05.248 "method": "bdev_nvme_attach_controller" 00:14:05.248 }' 00:14:05.248 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
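Each of the four bdevperf jobs above runs as its own process with a distinct instance ID (-i) and core mask, reading the bdev_nvme_attach_controller config rendered here from /dev/fd/63 via process substitution. A sketch of the invocations as traced, with gen_nvmf_target_json standing for the harness generator expanding above:

BP=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
# four 1-second workloads, queue depth 128, 4 KiB I/O, 256 MB of memory each (-s 256)
"$BP" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!
"$BP" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 &
READ_PID=$!
"$BP" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
FLUSH_PID=$!
"$BP" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
UNMAP_PID=$!
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"

The per-workload latency tables further down (roughly 8.6k write IOPS, 11.3k read, 9.5k unmap, and about 246k for the no-op flush path) are the combined output of these four processes.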
00:14:05.248 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:05.248 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:05.248 "params": { 00:14:05.248 "name": "Nvme1", 00:14:05.248 "trtype": "tcp", 00:14:05.248 "traddr": "10.0.0.2", 00:14:05.248 "adrfam": "ipv4", 00:14:05.248 "trsvcid": "4420", 00:14:05.248 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:05.248 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:05.248 "hdgst": false, 00:14:05.248 "ddgst": false 00:14:05.248 }, 00:14:05.248 "method": "bdev_nvme_attach_controller" 00:14:05.248 }' 00:14:05.248 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:05.248 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:05.248 "params": { 00:14:05.248 "name": "Nvme1", 00:14:05.248 "trtype": "tcp", 00:14:05.248 "traddr": "10.0.0.2", 00:14:05.248 "adrfam": "ipv4", 00:14:05.248 "trsvcid": "4420", 00:14:05.248 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:05.248 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:05.248 "hdgst": false, 00:14:05.248 "ddgst": false 00:14:05.248 }, 00:14:05.248 "method": "bdev_nvme_attach_controller" 00:14:05.248 }' 00:14:05.248 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:05.248 20:39:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:05.248 "params": { 00:14:05.248 "name": "Nvme1", 00:14:05.248 "trtype": "tcp", 00:14:05.248 "traddr": "10.0.0.2", 00:14:05.248 "adrfam": "ipv4", 00:14:05.248 "trsvcid": "4420", 00:14:05.248 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:05.248 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:05.248 "hdgst": false, 00:14:05.248 "ddgst": false 00:14:05.248 }, 00:14:05.248 "method": "bdev_nvme_attach_controller" 00:14:05.248 }' 00:14:05.248 [2024-07-15 20:39:39.587932] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:14:05.248 [2024-07-15 20:39:39.587979] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:05.248 [2024-07-15 20:39:39.590993] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:14:05.248 [2024-07-15 20:39:39.591042] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:05.248 [2024-07-15 20:39:39.592408] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:14:05.248 [2024-07-15 20:39:39.592449] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:05.248 [2024-07-15 20:39:39.593446] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:14:05.248 [2024-07-15 20:39:39.593491] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:05.248 EAL: No free 2048 kB hugepages reported on node 1 00:14:05.248 EAL: No free 2048 kB hugepages reported on node 1 00:14:05.508 [2024-07-15 20:39:39.765377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.508 EAL: No free 2048 kB hugepages reported on node 1 00:14:05.508 [2024-07-15 20:39:39.843115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:05.508 [2024-07-15 20:39:39.877497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.508 EAL: No free 2048 kB hugepages reported on node 1 00:14:05.508 [2024-07-15 20:39:39.926404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.508 [2024-07-15 20:39:39.969522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.508 [2024-07-15 20:39:39.973119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:05.766 [2024-07-15 20:39:40.002375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:05.766 [2024-07-15 20:39:40.049286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:14:05.766 Running I/O for 1 seconds... 00:14:05.766 Running I/O for 1 seconds... 00:14:05.766 Running I/O for 1 seconds... 00:14:06.024 Running I/O for 1 seconds... 00:14:06.960 00:14:06.960 Latency(us) 00:14:06.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.960 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:06.960 Nvme1n1 : 1.00 246091.95 961.30 0.00 0.00 517.49 207.47 683.85 00:14:06.960 =================================================================================================================== 00:14:06.960 Total : 246091.95 961.30 0.00 0.00 517.49 207.47 683.85 00:14:06.960 00:14:06.960 Latency(us) 00:14:06.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.960 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:06.960 Nvme1n1 : 1.01 8572.33 33.49 0.00 0.00 14847.03 6154.69 24732.72 00:14:06.960 =================================================================================================================== 00:14:06.960 Total : 8572.33 33.49 0.00 0.00 14847.03 6154.69 24732.72 00:14:06.960 00:14:06.960 Latency(us) 00:14:06.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.960 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:06.960 Nvme1n1 : 1.01 11284.71 44.08 0.00 0.00 11294.34 7693.36 21427.42 00:14:06.960 =================================================================================================================== 00:14:06.960 Total : 11284.71 44.08 0.00 0.00 11294.34 7693.36 21427.42 00:14:06.960 00:14:06.960 Latency(us) 00:14:06.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.960 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:06.960 Nvme1n1 : 1.00 9489.36 37.07 0.00 0.00 13461.53 3590.23 39891.48 00:14:06.960 =================================================================================================================== 00:14:06.960 Total : 9489.36 37.07 0.00 0.00 13461.53 3590.23 39891.48 00:14:06.960 20:39:41 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@38 -- # wait 2648062 00:14:06.960 20:39:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2648065 00:14:06.960 20:39:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2648069 00:14:07.219 20:39:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:07.219 20:39:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.219 20:39:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:07.219 20:39:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.219 20:39:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:07.219 20:39:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:07.219 20:39:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:07.219 20:39:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:14:07.219 20:39:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:07.219 20:39:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:14:07.219 20:39:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:07.219 20:39:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:07.219 rmmod nvme_tcp 00:14:07.219 rmmod nvme_fabrics 00:14:07.219 rmmod nvme_keyring 00:14:07.219 20:39:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:07.219 20:39:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:14:07.219 20:39:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:14:07.219 20:39:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2647931 ']' 00:14:07.219 20:39:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2647931 00:14:07.219 20:39:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 2647931 ']' 00:14:07.219 20:39:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 2647931 00:14:07.219 20:39:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:14:07.219 20:39:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:07.219 20:39:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2647931 00:14:07.219 20:39:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:07.219 20:39:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:07.219 20:39:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2647931' 00:14:07.219 killing process with pid 2647931 00:14:07.219 20:39:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 2647931 00:14:07.219 20:39:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 2647931 00:14:07.478 20:39:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:07.478 20:39:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:07.478 20:39:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:07.478 20:39:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:07.478 20:39:41 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:14:07.478 20:39:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.478 20:39:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:07.478 20:39:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.014 20:39:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:10.014 00:14:10.014 real 0m11.174s 00:14:10.014 user 0m19.501s 00:14:10.014 sys 0m5.938s 00:14:10.014 20:39:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:10.014 20:39:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:10.014 ************************************ 00:14:10.014 END TEST nvmf_bdev_io_wait 00:14:10.014 ************************************ 00:14:10.014 20:39:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:10.014 20:39:43 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:10.014 20:39:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:10.014 20:39:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:10.014 20:39:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:10.014 ************************************ 00:14:10.014 START TEST nvmf_queue_depth 00:14:10.014 ************************************ 00:14:10.014 20:39:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:10.014 * Looking for test storage... 
00:14:10.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:10.014 20:39:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:10.014 20:39:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:14:10.014 20:39:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:10.014 20:39:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:10.014 20:39:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:10.014 20:39:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:10.014 20:39:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:10.014 20:39:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:10.014 20:39:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:10.014 20:39:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:10.014 20:39:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:10.014 20:39:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:10.014 20:39:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:10.014 20:39:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:10.014 20:39:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:10.014 20:39:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:10.014 20:39:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:10.014 20:39:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:10.014 20:39:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:10.014 20:39:44 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:10.014 20:39:44 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:10.014 20:39:44 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:10.014 20:39:44 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.014 20:39:44 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.014 20:39:44 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.014 20:39:44 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:14:10.014 20:39:44 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.014 20:39:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:14:10.014 20:39:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:10.014 20:39:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:10.014 20:39:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:10.014 20:39:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:10.014 20:39:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:10.014 20:39:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:10.014 20:39:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:10.015 20:39:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:10.015 20:39:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:10.015 20:39:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:10.015 20:39:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:10.015 20:39:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:10.015 20:39:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:10.015 20:39:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:10.015 20:39:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:10.015 20:39:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:10.015 20:39:44 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:14:10.015 20:39:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.015 20:39:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:10.015 20:39:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.015 20:39:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:10.015 20:39:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:10.015 20:39:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:14:10.015 20:39:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:15.281 
20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:15.281 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:15.281 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:15.281 Found net devices under 0000:86:00.0: cvl_0_0 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:15.281 Found net devices under 0000:86:00.1: cvl_0_1 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:15.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:15.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:14:15.281 00:14:15.281 --- 10.0.0.2 ping statistics --- 00:14:15.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.281 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:15.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:15.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:14:15.281 00:14:15.281 --- 10.0.0.1 ping statistics --- 00:14:15.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.281 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:15.281 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:15.282 20:39:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:15.282 20:39:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:15.282 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2651849 00:14:15.282 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2651849 00:14:15.282 20:39:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:15.282 20:39:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2651849 ']' 00:14:15.282 20:39:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.282 20:39:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:15.282 20:39:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.282 20:39:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:15.282 20:39:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:15.282 [2024-07-15 20:39:49.380740] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
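The single-host topology these TCP tests run on is built entirely from the ip/iptables commands traced above: one port (cvl_0_0) is moved into a private namespace to act as the target at 10.0.0.2, while its link partner (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and a ping in each direction validates the path before the target starts. Condensed from the trace (the cvl_* interface names are specific to this machine):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator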
00:14:15.282 [2024-07-15 20:39:49.380783] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:15.282 EAL: No free 2048 kB hugepages reported on node 1 00:14:15.282 [2024-07-15 20:39:49.438447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.282 [2024-07-15 20:39:49.517047] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:15.282 [2024-07-15 20:39:49.517081] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:15.282 [2024-07-15 20:39:49.517088] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:15.282 [2024-07-15 20:39:49.517094] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:15.282 [2024-07-15 20:39:49.517099] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:15.282 [2024-07-15 20:39:49.517117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.847 20:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:15.847 20:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:14:15.847 20:39:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:15.847 20:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:15.847 20:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:15.847 20:39:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:15.847 20:39:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:15.847 20:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.847 20:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:15.847 [2024-07-15 20:39:50.216322] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:15.847 20:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.847 20:39:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:15.847 20:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.847 20:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:15.847 Malloc0 00:14:15.847 20:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.847 20:39:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:15.847 20:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.847 20:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:15.847 20:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.847 20:39:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:15.847 20:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.847 
20:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:15.847 20:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.847 20:39:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:15.847 20:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.847 20:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:15.847 [2024-07-15 20:39:50.277320] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:15.847 20:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.848 20:39:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2651980 00:14:15.848 20:39:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:15.848 20:39:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:15.848 20:39:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2651980 /var/tmp/bdevperf.sock 00:14:15.848 20:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2651980 ']' 00:14:15.848 20:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:15.848 20:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:15.848 20:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:15.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:15.848 20:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:15.848 20:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:15.848 [2024-07-15 20:39:50.326995] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
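Everything queue_depth.sh configures goes through the SPDK RPC interface; the rpc_cmd calls traced above reduce to five rpc.py invocations against the target socket plus a bdevperf instance driven through its own socket. A condensed sketch (rpc_cmd wraps scripts/rpc.py, which defaults to /var/tmp/spdk.sock):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator side: queue depth 1024, 4 KiB verify workload for 10 s
    bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

In the result table that follows, the MiB/s column is simply IOPS x IO size: 12217.16 x 4096 / 1048576 = 47.72 MiB/s, matching the reported value.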
00:14:15.848 [2024-07-15 20:39:50.327037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2651980 ] 00:14:16.107 EAL: No free 2048 kB hugepages reported on node 1 00:14:16.107 [2024-07-15 20:39:50.381181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.107 [2024-07-15 20:39:50.461026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.674 20:39:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:16.674 20:39:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:14:16.674 20:39:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:16.674 20:39:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.674 20:39:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:16.933 NVMe0n1 00:14:16.933 20:39:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.933 20:39:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:16.933 Running I/O for 10 seconds... 00:14:26.940 00:14:26.940 Latency(us) 00:14:26.940 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.940 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:26.940 Verification LBA range: start 0x0 length 0x4000 00:14:26.940 NVMe0n1 : 10.05 12217.16 47.72 0.00 0.00 83560.02 19717.79 59267.34 00:14:26.940 =================================================================================================================== 00:14:26.940 Total : 12217.16 47.72 0.00 0.00 83560.02 19717.79 59267.34 00:14:26.940 0 00:14:26.940 20:40:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2651980 00:14:26.940 20:40:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2651980 ']' 00:14:26.940 20:40:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2651980 00:14:26.940 20:40:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:26.940 20:40:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:26.940 20:40:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2651980 00:14:27.198 20:40:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:27.198 20:40:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:27.198 20:40:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2651980' 00:14:27.198 killing process with pid 2651980 00:14:27.198 20:40:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2651980 00:14:27.198 Received shutdown signal, test time was about 10.000000 seconds 00:14:27.198 00:14:27.198 Latency(us) 00:14:27.198 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:27.198 
=================================================================================================================== 00:14:27.198 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:27.198 20:40:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2651980 00:14:27.198 20:40:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:27.198 20:40:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:14:27.198 20:40:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:27.198 20:40:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:14:27.198 20:40:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:27.198 20:40:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:14:27.199 20:40:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:27.199 20:40:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:27.199 rmmod nvme_tcp 00:14:27.199 rmmod nvme_fabrics 00:14:27.199 rmmod nvme_keyring 00:14:27.458 20:40:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:27.458 20:40:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:14:27.458 20:40:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:14:27.458 20:40:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2651849 ']' 00:14:27.458 20:40:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2651849 00:14:27.458 20:40:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2651849 ']' 00:14:27.458 20:40:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2651849 00:14:27.458 20:40:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:27.458 20:40:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:27.458 20:40:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2651849 00:14:27.458 20:40:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:27.458 20:40:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:27.458 20:40:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2651849' 00:14:27.458 killing process with pid 2651849 00:14:27.458 20:40:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2651849 00:14:27.458 20:40:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2651849 00:14:27.458 20:40:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:27.458 20:40:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:27.458 20:40:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:27.458 20:40:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:27.458 20:40:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:27.458 20:40:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.458 20:40:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:27.458 20:40:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.991 20:40:03 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:29.991 00:14:29.991 real 0m20.027s 00:14:29.991 user 0m24.643s 00:14:29.991 sys 0m5.557s 00:14:29.991 20:40:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:29.991 20:40:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:29.991 ************************************ 00:14:29.991 END TEST nvmf_queue_depth 00:14:29.991 ************************************ 00:14:29.991 20:40:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:29.991 20:40:04 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:29.991 20:40:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:29.991 20:40:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:29.991 20:40:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:29.991 ************************************ 00:14:29.991 START TEST nvmf_target_multipath 00:14:29.991 ************************************ 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:29.991 * Looking for test storage... 00:14:29.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:14:29.991 20:40:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:35.254 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:35.254 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:35.254 Found net devices under 0000:86:00.0: cvl_0_0 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:35.254 Found net devices under 0000:86:00.1: cvl_0_1 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:35.254 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:35.255 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:35.255 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:35.255 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:35.255 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:35.255 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:35.513 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:35.513 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:35.513 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:35.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:35.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:14:35.513 00:14:35.513 --- 10.0.0.2 ping statistics --- 00:14:35.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.513 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:14:35.513 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:35.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:35.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:14:35.513 00:14:35.513 --- 10.0.0.1 ping statistics --- 00:14:35.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.513 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:14:35.513 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:35.513 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:14:35.513 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:35.513 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:35.513 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:35.513 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:35.513 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:35.513 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:35.513 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:35.513 20:40:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:14:35.513 20:40:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:14:35.513 only one NIC for nvmf test 00:14:35.513 20:40:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:14:35.513 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:35.513 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:35.513 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:35.513 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:35.513 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:35.513 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:35.513 rmmod nvme_tcp 00:14:35.513 rmmod nvme_fabrics 00:14:35.513 rmmod nvme_keyring 00:14:35.513 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:35.513 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:35.513 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:35.513 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:35.513 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:35.513 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:35.513 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:35.513 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:35.513 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:35.513 20:40:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.513 20:40:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:35.513 20:40:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.043 20:40:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:14:38.043 20:40:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:14:38.043 20:40:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:14:38.043 20:40:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:38.043 20:40:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:38.043 20:40:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:38.043 20:40:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:38.043 20:40:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:38.043 20:40:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:38.043 20:40:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:38.043 20:40:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:38.043 20:40:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:38.043 20:40:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:38.043 20:40:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:38.043 20:40:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:38.043 20:40:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:38.043 20:40:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:38.043 20:40:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:38.043 20:40:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.043 20:40:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:38.044 20:40:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.044 20:40:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:38.044 00:14:38.044 real 0m7.882s 00:14:38.044 user 0m1.614s 00:14:38.044 sys 0m4.243s 00:14:38.044 20:40:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:38.044 20:40:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:38.044 ************************************ 00:14:38.044 END TEST nvmf_target_multipath 00:14:38.044 ************************************ 00:14:38.044 20:40:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:38.044 20:40:11 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:38.044 20:40:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:38.044 20:40:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:38.044 20:40:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:38.044 ************************************ 00:14:38.044 START TEST nvmf_zcopy 00:14:38.044 ************************************ 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:38.044 * Looking for test storage... 
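Before the zcopy run's storage probe resolves below, note the shape of the nvmftestfini teardown that closed the multipath test above: unload the kernel NVMe/TCP modules with bounded retries, remove the SPDK network namespace, and flush the test address. A minimal bash sketch of that sequence, assuming the cvl_0_* interface names and the _remove_spdk_ns helper from this run's nvmf/common.sh:

  sync
  set +e                                   # module unload may need retries
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
  done
  set -e
  _remove_spdk_ns 14> /dev/null            # drops the cvl_0_0_ns_spdk namespace
  ip -4 addr flush cvl_0_1                 # clear the initiator-side test address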
00:14:38.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:14:38.044 20:40:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:43.310 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:43.310 
20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:43.310 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:43.310 Found net devices under 0000:86:00.0: cvl_0_0 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:43.310 Found net devices under 0000:86:00.1: cvl_0_1 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:43.310 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:43.311 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:43.311 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:43.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:43.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:14:43.311 00:14:43.311 --- 10.0.0.2 ping statistics --- 00:14:43.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.311 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:14:43.311 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:43.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:43.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:14:43.311 00:14:43.311 --- 10.0.0.1 ping statistics --- 00:14:43.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.311 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:14:43.311 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:43.311 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:14:43.311 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:43.311 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:43.311 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:43.311 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:43.311 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:43.311 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:43.311 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:43.311 20:40:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:14:43.311 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:43.311 20:40:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:43.311 20:40:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:43.311 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2660815 00:14:43.311 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2660815 00:14:43.311 20:40:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:43.311 20:40:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 2660815 ']' 00:14:43.311 20:40:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.311 20:40:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:43.311 20:40:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.311 20:40:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:43.311 20:40:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:43.311 [2024-07-15 20:40:17.364073] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:14:43.311 [2024-07-15 20:40:17.364118] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.311 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.311 [2024-07-15 20:40:17.419007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.311 [2024-07-15 20:40:17.489585] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.311 [2024-07-15 20:40:17.489624] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:43.311 [2024-07-15 20:40:17.489631] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:43.311 [2024-07-15 20:40:17.489637] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:43.311 [2024-07-15 20:40:17.489642] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:43.311 [2024-07-15 20:40:17.489674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:43.879 [2024-07-15 20:40:18.196531] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:43.879 [2024-07-15 20:40:18.216694] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:43.879 malloc0 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.879 
20:40:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:43.879 { 00:14:43.879 "params": { 00:14:43.879 "name": "Nvme$subsystem", 00:14:43.879 "trtype": "$TEST_TRANSPORT", 00:14:43.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:43.879 "adrfam": "ipv4", 00:14:43.879 "trsvcid": "$NVMF_PORT", 00:14:43.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:43.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:43.879 "hdgst": ${hdgst:-false}, 00:14:43.879 "ddgst": ${ddgst:-false} 00:14:43.879 }, 00:14:43.879 "method": "bdev_nvme_attach_controller" 00:14:43.879 } 00:14:43.879 EOF 00:14:43.879 )") 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:14:43.879 20:40:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:43.879 "params": { 00:14:43.879 "name": "Nvme1", 00:14:43.879 "trtype": "tcp", 00:14:43.879 "traddr": "10.0.0.2", 00:14:43.879 "adrfam": "ipv4", 00:14:43.879 "trsvcid": "4420", 00:14:43.879 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.879 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:43.879 "hdgst": false, 00:14:43.879 "ddgst": false 00:14:43.879 }, 00:14:43.879 "method": "bdev_nvme_attach_controller" 00:14:43.879 }' 00:14:43.879 [2024-07-15 20:40:18.296722] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:14:43.879 [2024-07-15 20:40:18.296764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2660873 ] 00:14:43.879 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.879 [2024-07-15 20:40:18.349390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.139 [2024-07-15 20:40:18.426246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.398 Running I/O for 10 seconds... 
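The ten-second verify pass above feeds bdevperf a JSON config generated on the fly and handed over an anonymous file descriptor (/dev/fd/62), which is what bash process substitution produces. Restated as a standalone command; the attach-controller entry is exactly the object printf'd above, and gen_nvmf_target_json wraps it in the larger config document (the wrapper itself is not echoed in this trace):

  # run from the SPDK tree; gen_nvmf_target_json comes from test/nvmf/common.sh
  ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192

The resolved entry tells bdevperf to attach Nvme1 over TCP to 10.0.0.2:4420 against nqn.2016-06.io.spdk:cnode1, with host NQN nqn.2016-06.io.spdk:host1 and header/data digests disabled.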
00:14:54.419 00:14:54.420 Latency(us) 00:14:54.420 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.420 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:14:54.420 Verification LBA range: start 0x0 length 0x1000 00:14:54.420 Nvme1n1 : 10.01 8666.86 67.71 0.00 0.00 14726.27 2550.21 27582.11 00:14:54.420 =================================================================================================================== 00:14:54.420 Total : 8666.86 67.71 0.00 0.00 14726.27 2550.21 27582.11 00:14:54.420 20:40:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2662699 00:14:54.420 20:40:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:14:54.420 20:40:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:54.420 20:40:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:14:54.420 20:40:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:14:54.420 20:40:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:14:54.420 20:40:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:14:54.420 20:40:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:54.420 20:40:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:54.420 { 00:14:54.420 "params": { 00:14:54.420 "name": "Nvme$subsystem", 00:14:54.420 "trtype": "$TEST_TRANSPORT", 00:14:54.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:54.420 "adrfam": "ipv4", 00:14:54.420 "trsvcid": "$NVMF_PORT", 00:14:54.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:54.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:54.420 "hdgst": ${hdgst:-false}, 00:14:54.420 "ddgst": ${ddgst:-false} 00:14:54.420 }, 00:14:54.420 "method": "bdev_nvme_attach_controller" 00:14:54.420 } 00:14:54.420 EOF 00:14:54.420 )") 00:14:54.420 20:40:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:14:54.420 [2024-07-15 20:40:28.845700] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.420 [2024-07-15 20:40:28.845738] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.420 20:40:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
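Above, target/zcopy.sh@37 launches a second bdevperf pass (five seconds of 50/50 random read/write, captured as perfpid=2662699) while the test re-adds the already-attached namespace; each attempt fails with "Requested NSID 1 already in use" and is rejected in the nvmf_rpc_ns_paused callback, producing the error pair that recurs through the rest of this trace while the JSON assembly continues just below. A sketch of the concurrent pattern, with the loop itself assumed (the trace shows only its effects):

  ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192 &
  perfpid=$!
  while kill -0 "$perfpid" 2> /dev/null; do
      # expected to fail: NSID 1 is already attached; the attempt still
      # exercises the subsystem pause/resume path under live I/O
      rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done
  wait "$perfpid"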
00:14:54.420 20:40:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:14:54.420 20:40:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:54.420 "params": { 00:14:54.420 "name": "Nvme1", 00:14:54.420 "trtype": "tcp", 00:14:54.420 "traddr": "10.0.0.2", 00:14:54.420 "adrfam": "ipv4", 00:14:54.420 "trsvcid": "4420", 00:14:54.420 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:54.420 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:54.420 "hdgst": false, 00:14:54.420 "ddgst": false 00:14:54.420 }, 00:14:54.420 "method": "bdev_nvme_attach_controller" 00:14:54.420 }' 00:14:54.420 [2024-07-15 20:40:28.857697] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.420 [2024-07-15 20:40:28.857710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.420 [2024-07-15 20:40:28.869721] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.420 [2024-07-15 20:40:28.869731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.420 [2024-07-15 20:40:28.881755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.420 [2024-07-15 20:40:28.881764] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.420 [2024-07-15 20:40:28.885103] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:14:54.420 [2024-07-15 20:40:28.885141] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2662699 ] 00:14:54.420 [2024-07-15 20:40:28.893787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.420 [2024-07-15 20:40:28.893796] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.679 [2024-07-15 20:40:28.905818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.679 [2024-07-15 20:40:28.905827] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.679 EAL: No free 2048 kB hugepages reported on node 1 00:14:54.679 [2024-07-15 20:40:28.917848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.679 [2024-07-15 20:40:28.917856] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.679 [2024-07-15 20:40:28.929883] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.679 [2024-07-15 20:40:28.929891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.679 [2024-07-15 20:40:28.939312] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.679 [2024-07-15 20:40:28.941913] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.679 [2024-07-15 20:40:28.941923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.679 [2024-07-15 20:40:28.953947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.679 [2024-07-15 20:40:28.953959] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.679 [2024-07-15 20:40:28.965977] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.679 [2024-07-15 20:40:28.965987] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.679 [2024-07-15 20:40:28.978011] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.679 [2024-07-15 20:40:28.978028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.679 [2024-07-15 20:40:28.990043] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.679 [2024-07-15 20:40:28.990058] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.679 [2024-07-15 20:40:29.002073] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.679 [2024-07-15 20:40:29.002083] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.679 [2024-07-15 20:40:29.014109] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.679 [2024-07-15 20:40:29.014124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.679 [2024-07-15 20:40:29.015053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.679 [2024-07-15 20:40:29.026147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.679 [2024-07-15 20:40:29.026162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.679 [2024-07-15 20:40:29.038179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.679 [2024-07-15 20:40:29.038196] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.679 [2024-07-15 20:40:29.050212] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.679 [2024-07-15 20:40:29.050223] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.679 [2024-07-15 20:40:29.062246] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.679 [2024-07-15 20:40:29.062256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.679 [2024-07-15 20:40:29.074279] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.679 [2024-07-15 20:40:29.074290] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.679 [2024-07-15 20:40:29.086306] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.679 [2024-07-15 20:40:29.086314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.679 [2024-07-15 20:40:29.098335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.679 [2024-07-15 20:40:29.098343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.679 [2024-07-15 20:40:29.110387] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.679 [2024-07-15 20:40:29.110408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.679 [2024-07-15 20:40:29.122408] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.679 [2024-07-15 20:40:29.122420] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.679 [2024-07-15 20:40:29.134440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.679 [2024-07-15 20:40:29.134453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:14:54.679 [2024-07-15 20:40:29.146471] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.679 [2024-07-15 20:40:29.146491] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.679 [2024-07-15 20:40:29.158501] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.679 [2024-07-15 20:40:29.158510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.938 [2024-07-15 20:40:29.170539] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.938 [2024-07-15 20:40:29.170552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.938 [2024-07-15 20:40:29.182575] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.938 [2024-07-15 20:40:29.182590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.938 [2024-07-15 20:40:29.194602] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.938 [2024-07-15 20:40:29.194616] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.938 [2024-07-15 20:40:29.206643] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.938 [2024-07-15 20:40:29.206659] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.938 Running I/O for 5 seconds... 00:14:54.938 [2024-07-15 20:40:29.218667] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.938 [2024-07-15 20:40:29.218676] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.938 [2024-07-15 20:40:29.231105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.938 [2024-07-15 20:40:29.231123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.938 [2024-07-15 20:40:29.239940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.938 [2024-07-15 20:40:29.239959] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.938 [2024-07-15 20:40:29.248796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.938 [2024-07-15 20:40:29.248815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.938 [2024-07-15 20:40:29.263375] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.938 [2024-07-15 20:40:29.263394] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.938 [2024-07-15 20:40:29.276939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.939 [2024-07-15 20:40:29.276957] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.939 [2024-07-15 20:40:29.284509] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.939 [2024-07-15 20:40:29.284526] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.939 [2024-07-15 20:40:29.294659] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.939 [2024-07-15 20:40:29.294676] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.939 [2024-07-15 20:40:29.303339] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.939 [2024-07-15 20:40:29.303356] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[message pair repeated: subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace; identical pairs recur from 20:40:29.311908 through 20:40:30.319905]
00:14:55.971 [2024-07-15 20:40:30.333835]
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.971 [2024-07-15 20:40:30.333852] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.971 [2024-07-15 20:40:30.347659] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.971 [2024-07-15 20:40:30.347681] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.971 [2024-07-15 20:40:30.361584] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.972 [2024-07-15 20:40:30.361602] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.972 [2024-07-15 20:40:30.370503] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.972 [2024-07-15 20:40:30.370521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.972 [2024-07-15 20:40:30.384591] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.972 [2024-07-15 20:40:30.384609] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.972 [2024-07-15 20:40:30.398127] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.972 [2024-07-15 20:40:30.398145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.972 [2024-07-15 20:40:30.407301] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.972 [2024-07-15 20:40:30.407319] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.972 [2024-07-15 20:40:30.421934] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.972 [2024-07-15 20:40:30.421952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.972 [2024-07-15 20:40:30.435585] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.972 [2024-07-15 20:40:30.435602] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.972 [2024-07-15 20:40:30.450285] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.972 [2024-07-15 20:40:30.450302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.229 [2024-07-15 20:40:30.465580] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.229 [2024-07-15 20:40:30.465597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.229 [2024-07-15 20:40:30.474376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.229 [2024-07-15 20:40:30.474394] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.229 [2024-07-15 20:40:30.488565] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.230 [2024-07-15 20:40:30.488583] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.230 [2024-07-15 20:40:30.497492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.230 [2024-07-15 20:40:30.497510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.230 [2024-07-15 20:40:30.506440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.230 [2024-07-15 20:40:30.506458] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.230 [2024-07-15 20:40:30.520851] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.230 [2024-07-15 20:40:30.520869] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.230 [2024-07-15 20:40:30.529995] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.230 [2024-07-15 20:40:30.530012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.230 [2024-07-15 20:40:30.539378] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.230 [2024-07-15 20:40:30.539396] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.230 [2024-07-15 20:40:30.554096] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.230 [2024-07-15 20:40:30.554113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.230 [2024-07-15 20:40:30.565257] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.230 [2024-07-15 20:40:30.565275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.230 [2024-07-15 20:40:30.579567] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.230 [2024-07-15 20:40:30.579585] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.230 [2024-07-15 20:40:30.593468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.230 [2024-07-15 20:40:30.593486] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.230 [2024-07-15 20:40:30.602253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.230 [2024-07-15 20:40:30.602271] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.230 [2024-07-15 20:40:30.610815] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.230 [2024-07-15 20:40:30.610832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.230 [2024-07-15 20:40:30.620113] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.230 [2024-07-15 20:40:30.620131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.230 [2024-07-15 20:40:30.634546] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.230 [2024-07-15 20:40:30.634564] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.230 [2024-07-15 20:40:30.643440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.230 [2024-07-15 20:40:30.643457] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.230 [2024-07-15 20:40:30.652685] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.230 [2024-07-15 20:40:30.652702] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.230 [2024-07-15 20:40:30.662325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.230 [2024-07-15 20:40:30.662343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.230 [2024-07-15 20:40:30.671108] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.230 [2024-07-15 20:40:30.671125] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.230 [2024-07-15 20:40:30.685608] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.230 [2024-07-15 20:40:30.685626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.230 [2024-07-15 20:40:30.699445] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.230 [2024-07-15 20:40:30.699463] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.230 [2024-07-15 20:40:30.708493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.230 [2024-07-15 20:40:30.708510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.487 [2024-07-15 20:40:30.723186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.487 [2024-07-15 20:40:30.723204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.487 [2024-07-15 20:40:30.734095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.487 [2024-07-15 20:40:30.734113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.487 [2024-07-15 20:40:30.748547] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.487 [2024-07-15 20:40:30.748564] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.487 [2024-07-15 20:40:30.757312] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.487 [2024-07-15 20:40:30.757329] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.487 [2024-07-15 20:40:30.772085] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.487 [2024-07-15 20:40:30.772103] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.487 [2024-07-15 20:40:30.787703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.487 [2024-07-15 20:40:30.787720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.487 [2024-07-15 20:40:30.801580] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.487 [2024-07-15 20:40:30.801598] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.488 [2024-07-15 20:40:30.815240] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.488 [2024-07-15 20:40:30.815274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.488 [2024-07-15 20:40:30.829574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.488 [2024-07-15 20:40:30.829592] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.488 [2024-07-15 20:40:30.840730] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.488 [2024-07-15 20:40:30.840748] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.488 [2024-07-15 20:40:30.848931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.488 [2024-07-15 20:40:30.848949] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.488 [2024-07-15 20:40:30.857591] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.488 [2024-07-15 20:40:30.857610] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.488 [2024-07-15 20:40:30.872116] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.488 [2024-07-15 20:40:30.872134] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.488 [2024-07-15 20:40:30.885790] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.488 [2024-07-15 20:40:30.885808] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.488 [2024-07-15 20:40:30.894622] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.488 [2024-07-15 20:40:30.894639] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.488 [2024-07-15 20:40:30.903237] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.488 [2024-07-15 20:40:30.903254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.488 [2024-07-15 20:40:30.918083] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.488 [2024-07-15 20:40:30.918100] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.488 [2024-07-15 20:40:30.933497] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.488 [2024-07-15 20:40:30.933516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.488 [2024-07-15 20:40:30.942339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.488 [2024-07-15 20:40:30.942356] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.488 [2024-07-15 20:40:30.951601] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.488 [2024-07-15 20:40:30.951620] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.488 [2024-07-15 20:40:30.960287] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.488 [2024-07-15 20:40:30.960305] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.488 [2024-07-15 20:40:30.969796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.488 [2024-07-15 20:40:30.969814] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.746 [2024-07-15 20:40:30.984417] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.746 [2024-07-15 20:40:30.984436] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.746 [2024-07-15 20:40:30.998216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.746 [2024-07-15 20:40:30.998244] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.746 [2024-07-15 20:40:31.012197] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.746 [2024-07-15 20:40:31.012215] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.746 [2024-07-15 20:40:31.021346] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.746 [2024-07-15 20:40:31.021365] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.746 [2024-07-15 20:40:31.030171] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.746 [2024-07-15 20:40:31.030191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.746 [2024-07-15 20:40:31.045101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.746 [2024-07-15 20:40:31.045120] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.746 [2024-07-15 20:40:31.052689] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.746 [2024-07-15 20:40:31.052707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.746 [2024-07-15 20:40:31.061784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.746 [2024-07-15 20:40:31.061802] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.746 [2024-07-15 20:40:31.070988] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.746 [2024-07-15 20:40:31.071006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.746 [2024-07-15 20:40:31.080158] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.746 [2024-07-15 20:40:31.080176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.746 [2024-07-15 20:40:31.094959] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.746 [2024-07-15 20:40:31.094977] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.746 [2024-07-15 20:40:31.110800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.746 [2024-07-15 20:40:31.110819] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.746 [2024-07-15 20:40:31.119690] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.746 [2024-07-15 20:40:31.119712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.746 [2024-07-15 20:40:31.129064] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.746 [2024-07-15 20:40:31.129082] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.746 [2024-07-15 20:40:31.143684] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.746 [2024-07-15 20:40:31.143703] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.746 [2024-07-15 20:40:31.157317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.746 [2024-07-15 20:40:31.157335] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.746 [2024-07-15 20:40:31.166480] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.746 [2024-07-15 20:40:31.166499] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.746 [2024-07-15 20:40:31.175990] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.746 [2024-07-15 20:40:31.176008] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.746 [2024-07-15 20:40:31.184822] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.746 [2024-07-15 20:40:31.184840] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.746 [2024-07-15 20:40:31.199707] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.746 [2024-07-15 20:40:31.199725] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.746 [2024-07-15 20:40:31.214982] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.746 [2024-07-15 20:40:31.215001] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.746 [2024-07-15 20:40:31.223945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.746 [2024-07-15 20:40:31.223963] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.003 [2024-07-15 20:40:31.238426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.003 [2024-07-15 20:40:31.238445] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.003 [2024-07-15 20:40:31.249577] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.003 [2024-07-15 20:40:31.249595] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.003 [2024-07-15 20:40:31.258453] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.003 [2024-07-15 20:40:31.258471] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.004 [2024-07-15 20:40:31.272863] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.004 [2024-07-15 20:40:31.272882] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.004 [2024-07-15 20:40:31.281698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.004 [2024-07-15 20:40:31.281716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.004 [2024-07-15 20:40:31.296098] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.004 [2024-07-15 20:40:31.296117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.004 [2024-07-15 20:40:31.304895] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.004 [2024-07-15 20:40:31.304914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.004 [2024-07-15 20:40:31.313715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.004 [2024-07-15 20:40:31.313734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.004 [2024-07-15 20:40:31.328365] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.004 [2024-07-15 20:40:31.328385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.004 [2024-07-15 20:40:31.337179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.004 [2024-07-15 20:40:31.337198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.004 [2024-07-15 20:40:31.346137] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.004 [2024-07-15 20:40:31.346155] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.004 [2024-07-15 20:40:31.355290] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.004 [2024-07-15 20:40:31.355309] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.004 [2024-07-15 20:40:31.364469] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.004 [2024-07-15 20:40:31.364487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.004 [2024-07-15 20:40:31.379312] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.004 [2024-07-15 20:40:31.379330] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.004 [2024-07-15 20:40:31.389996] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.004 [2024-07-15 20:40:31.390015] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.004 [2024-07-15 20:40:31.399316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.004 [2024-07-15 20:40:31.399333] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.004 [2024-07-15 20:40:31.414079] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.004 [2024-07-15 20:40:31.414097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.004 [2024-07-15 20:40:31.425056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.004 [2024-07-15 20:40:31.425073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.004 [2024-07-15 20:40:31.439453] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.004 [2024-07-15 20:40:31.439486] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.004 [2024-07-15 20:40:31.452922] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.004 [2024-07-15 20:40:31.452940] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.004 [2024-07-15 20:40:31.461750] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.004 [2024-07-15 20:40:31.461768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.004 [2024-07-15 20:40:31.470293] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.004 [2024-07-15 20:40:31.470310] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.004 [2024-07-15 20:40:31.484937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.004 [2024-07-15 20:40:31.484956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.262 [2024-07-15 20:40:31.497989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.262 [2024-07-15 20:40:31.498007] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.262 [2024-07-15 20:40:31.512556] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.262 [2024-07-15 20:40:31.512574] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.262 [2024-07-15 20:40:31.522994] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.262 [2024-07-15 20:40:31.523012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.262 [2024-07-15 20:40:31.537409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.262 [2024-07-15 20:40:31.537427] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.262 [2024-07-15 20:40:31.545015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.262 [2024-07-15 20:40:31.545033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.262 [2024-07-15 20:40:31.558948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.262 [2024-07-15 20:40:31.558966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.262 [2024-07-15 20:40:31.572813] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.262 [2024-07-15 20:40:31.572831] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.262 [2024-07-15 20:40:31.581848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.262 [2024-07-15 20:40:31.581865] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.262 [2024-07-15 20:40:31.596112] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.262 [2024-07-15 20:40:31.596130] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.262 [2024-07-15 20:40:31.609875] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.262 [2024-07-15 20:40:31.609892] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.262 [2024-07-15 20:40:31.624343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.262 [2024-07-15 20:40:31.624372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.262 [2024-07-15 20:40:31.635182] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.262 [2024-07-15 20:40:31.635200] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.262 [2024-07-15 20:40:31.649775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.262 [2024-07-15 20:40:31.649793] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.262 [2024-07-15 20:40:31.663327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.262 [2024-07-15 20:40:31.663345] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.262 [2024-07-15 20:40:31.672117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.262 [2024-07-15 20:40:31.672139] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.262 [2024-07-15 20:40:31.686459] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.262 [2024-07-15 20:40:31.686477] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.262 [2024-07-15 20:40:31.695438] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.262 [2024-07-15 20:40:31.695456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.262 [2024-07-15 20:40:31.710136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.262 [2024-07-15 20:40:31.710153] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.262 [2024-07-15 20:40:31.725579] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.262 [2024-07-15 20:40:31.725598] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.262 [2024-07-15 20:40:31.739579] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.262 [2024-07-15 20:40:31.739597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.521 [2024-07-15 20:40:31.753390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.521 [2024-07-15 20:40:31.753409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.521 [2024-07-15 20:40:31.767674] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.521 [2024-07-15 20:40:31.767691] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.521 [2024-07-15 20:40:31.778877] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.521 [2024-07-15 20:40:31.778895] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.521 [2024-07-15 20:40:31.787516] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.521 [2024-07-15 20:40:31.787533] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.521 [2024-07-15 20:40:31.796341] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.521 [2024-07-15 20:40:31.796359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.521 [2024-07-15 20:40:31.810762] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.521 [2024-07-15 20:40:31.810780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.521 [2024-07-15 20:40:31.819573] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.521 [2024-07-15 20:40:31.819590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.521 [2024-07-15 20:40:31.828406] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.521 [2024-07-15 20:40:31.828424] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.521 [2024-07-15 20:40:31.837545] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.521 [2024-07-15 20:40:31.837562] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.521 [2024-07-15 20:40:31.852140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.521 [2024-07-15 20:40:31.852158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.521 [2024-07-15 20:40:31.863519] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.521 [2024-07-15 20:40:31.863537] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.521 [2024-07-15 20:40:31.872145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.521 [2024-07-15 20:40:31.872162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.521 [2024-07-15 20:40:31.887081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.521 [2024-07-15 20:40:31.887099] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.521 [2024-07-15 20:40:31.902126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.521 [2024-07-15 20:40:31.902147] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.521 [2024-07-15 20:40:31.911076] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.522 [2024-07-15 20:40:31.911093] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.522 [2024-07-15 20:40:31.925395] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.522 [2024-07-15 20:40:31.925413] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.522 [2024-07-15 20:40:31.939081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.522 [2024-07-15 20:40:31.939099] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.522 [2024-07-15 20:40:31.948302] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.522 [2024-07-15 20:40:31.948320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.522 [2024-07-15 20:40:31.957376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.522 [2024-07-15 20:40:31.957393] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.522 [2024-07-15 20:40:31.966806] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.522 [2024-07-15 20:40:31.966824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.522 [2024-07-15 20:40:31.981525] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.522 [2024-07-15 20:40:31.981542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.522 [2024-07-15 20:40:31.989308] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.522 [2024-07-15 20:40:31.989325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.522 [2024-07-15 20:40:31.998337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.522 [2024-07-15 20:40:31.998355] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.780 [2024-07-15 20:40:32.006848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.780 [2024-07-15 20:40:32.006867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.780 [2024-07-15 20:40:32.016425] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.780 [2024-07-15 20:40:32.016444] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.780 [2024-07-15 20:40:32.030953] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.780 [2024-07-15 20:40:32.030971] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.780 [2024-07-15 20:40:32.039446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.780 [2024-07-15 20:40:32.039463] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.780 [2024-07-15 20:40:32.053966] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.780 [2024-07-15 20:40:32.053984] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.780 [2024-07-15 20:40:32.067448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.780 [2024-07-15 20:40:32.067465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.780 [2024-07-15 20:40:32.076246] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.780 [2024-07-15 20:40:32.076280] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.780 [2024-07-15 20:40:32.090742] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.780 [2024-07-15 20:40:32.090760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.780 [2024-07-15 20:40:32.099489] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.780 [2024-07-15 20:40:32.099506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.780 [2024-07-15 20:40:32.113971] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.780 [2024-07-15 20:40:32.113993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.780 [2024-07-15 20:40:32.127853] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.780 [2024-07-15 20:40:32.127871] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.780 [2024-07-15 20:40:32.141374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.780 [2024-07-15 20:40:32.141392] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.780 [2024-07-15 20:40:32.155585] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.780 [2024-07-15 20:40:32.155603] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.780 [2024-07-15 20:40:32.167050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.780 [2024-07-15 20:40:32.167068] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.780 [2024-07-15 20:40:32.176041] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.780 [2024-07-15 20:40:32.176059] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.780 [2024-07-15 20:40:32.190530] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.780 [2024-07-15 20:40:32.190548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.781 [2024-07-15 20:40:32.204352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.781 [2024-07-15 20:40:32.204370] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.781 [2024-07-15 20:40:32.218482] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.781 [2024-07-15 20:40:32.218499] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.781 [2024-07-15 20:40:32.227708] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.781 [2024-07-15 20:40:32.227727] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.781 [2024-07-15 20:40:32.236911] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.781 [2024-07-15 20:40:32.236930] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.781 [2024-07-15 20:40:32.245583] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.781 [2024-07-15 20:40:32.245601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.781 [2024-07-15 20:40:32.254450] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.781 [2024-07-15 20:40:32.254467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.063 [2024-07-15 20:40:32.268804] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.063 [2024-07-15 20:40:32.268822] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.063 [2024-07-15 20:40:32.277630] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.063 [2024-07-15 20:40:32.277647] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.063 [2024-07-15 20:40:32.287093] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.063 [2024-07-15 20:40:32.287110] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.063 [2024-07-15 20:40:32.296205] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.063 [2024-07-15 20:40:32.296222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.063 [2024-07-15 20:40:32.305635] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.063 [2024-07-15 20:40:32.305652] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.063 [2024-07-15 20:40:32.319927] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.063 [2024-07-15 20:40:32.319944] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.063 [2024-07-15 20:40:32.333870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.063 [2024-07-15 20:40:32.333888] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.063 [2024-07-15 20:40:32.347904] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.063 [2024-07-15 20:40:32.347922] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.063 [2024-07-15 20:40:32.362376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.063 [2024-07-15 20:40:32.362395] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.063 [2024-07-15 20:40:32.377487] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.063 [2024-07-15 20:40:32.377506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.063 [2024-07-15 20:40:32.391634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.063 [2024-07-15 20:40:32.391653] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.063 [2024-07-15 20:40:32.405122] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.063 [2024-07-15 20:40:32.405141] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.063 [2024-07-15 20:40:32.414150] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.063 [2024-07-15 20:40:32.414168] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.063 [2024-07-15 20:40:32.428810] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.063 [2024-07-15 20:40:32.428830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.063 [2024-07-15 20:40:32.439588] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.063 [2024-07-15 20:40:32.439608] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.063 [2024-07-15 20:40:32.454237] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.063 [2024-07-15 20:40:32.454257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.063 [2024-07-15 20:40:32.463449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.063 [2024-07-15 20:40:32.463467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.063 [2024-07-15 20:40:32.477976] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.063 [2024-07-15 20:40:32.477994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.063 [2024-07-15 20:40:32.488927] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.063 [2024-07-15 20:40:32.488946] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.063 [2024-07-15 20:40:32.496263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.063 [2024-07-15 20:40:32.496281] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.063 [2024-07-15 20:40:32.510563] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.063 [2024-07-15 20:40:32.510581] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.063 [2024-07-15 20:40:32.525036] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.063 [2024-07-15 20:40:32.525055] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.063 [2024-07-15 20:40:32.538586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.063 [2024-07-15 20:40:32.538603] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.322 [2024-07-15 20:40:32.547554] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.322 [2024-07-15 20:40:32.547572] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.322 [2024-07-15 20:40:32.556186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.322 [2024-07-15 20:40:32.556204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the identical two-entry pair (subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext "Requested NSID 1 already in use" followed by nvmf_rpc.c:1553:nvmf_rpc_ns_paused "Unable to add namespace") repeats continuously from 20:40:32.556 through 20:40:34.233; duplicate entries elided ...]
00:14:59.879 Latency(us)
00:14:59.879 Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:14:59.879 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:14:59.879 Nvme1n1            :       5.01   16677.39     130.29      0.00     0.00    7667.17    2521.71   20857.54
00:14:59.879 ===================================================================================================================
00:14:59.879 Total              :            16677.39     130.29      0.00     0.00    7667.17    2521.71   20857.54
[... further identical NSID 1 error pairs from 20:40:34.242 through 20:40:34.411 elided ...]
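Each error pair above is one rejected nvmf_subsystem_add_ns RPC: spdk_nvmf_subsystem_add_ns_ext refuses the request because NSID 1 is already occupied on cnode1, and the RPC layer then logs "Unable to add namespace". A minimal sketch of provoking the same failure by hand, assuming a running target on the default RPC socket; the malloc0 bdev name is an assumption for illustration:

    # first add succeeds and claims NSID 1 (assumes a bdev named malloc0 exists)
    ./scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0
    # repeating the call with the same NSID fails with
    # "Requested NSID 1 already in use"
    ./scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0

00:15:00.138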
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2662699) - No such process 00:15:00.138 20:40:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2662699 00:15:00.138 20:40:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:00.138 20:40:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.138 20:40:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:00.138 20:40:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.138 20:40:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:00.138 20:40:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.138 20:40:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:00.138 delay0 00:15:00.138 20:40:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.138 20:40:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:00.138 20:40:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.138 20:40:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:00.138 20:40:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.138 20:40:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:00.138 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.138 [2024-07-15 20:40:34.543548] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:06.700 Initializing NVMe Controllers 00:15:06.700 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:06.700 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:06.700 Initialization complete. Launching workers. 
00:15:06.700 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 95 00:15:06.700 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 382, failed to submit 33 00:15:06.700 success 166, unsuccess 216, failed 0 00:15:06.700 20:40:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:06.700 20:40:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:15:06.700 20:40:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:06.700 20:40:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:15:06.700 20:40:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:06.700 20:40:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:15:06.700 20:40:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:06.700 20:40:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:06.700 rmmod nvme_tcp 00:15:06.700 rmmod nvme_fabrics 00:15:06.700 rmmod nvme_keyring 00:15:06.700 20:40:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:06.701 20:40:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:15:06.701 20:40:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:15:06.701 20:40:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2660815 ']' 00:15:06.701 20:40:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2660815 00:15:06.701 20:40:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 2660815 ']' 00:15:06.701 20:40:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 2660815 00:15:06.701 20:40:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:15:06.701 20:40:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:06.701 20:40:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2660815 00:15:06.701 20:40:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:06.701 20:40:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:06.701 20:40:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2660815' 00:15:06.701 killing process with pid 2660815 00:15:06.701 20:40:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 2660815 00:15:06.701 20:40:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 2660815 00:15:06.701 20:40:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:06.701 20:40:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:06.701 20:40:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:06.701 20:40:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:06.701 20:40:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:06.701 20:40:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.701 20:40:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:06.701 20:40:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:08.605 20:40:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:08.605 00:15:08.605 real 0m30.987s 00:15:08.605 user 0m42.592s 00:15:08.605 sys 0m10.108s 00:15:08.605 20:40:43 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:15:08.605 20:40:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:08.605 ************************************ 00:15:08.605 END TEST nvmf_zcopy 00:15:08.605 ************************************ 00:15:08.605 20:40:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:08.605 20:40:43 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:08.605 20:40:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:08.605 20:40:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:08.605 20:40:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:08.605 ************************************ 00:15:08.605 START TEST nvmf_nmic 00:15:08.605 ************************************ 00:15:08.605 20:40:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:08.863 * Looking for test storage... 00:15:08.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:08.863 20:40:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:08.863 20:40:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:15:08.863 20:40:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:08.863 20:40:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:08.863 20:40:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:08.863 20:40:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:08.863 20:40:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:08.863 20:40:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:08.863 20:40:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:08.863 20:40:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:08.863 20:40:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:08.863 20:40:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:08.863 20:40:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:08.863 20:40:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:08.863 20:40:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:08.863 20:40:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:08.863 20:40:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:08.863 20:40:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:08.863 20:40:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:08.863 20:40:43 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:08.863 20:40:43 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:08.863 20:40:43 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:08.863 20:40:43 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same toolchain directories repeated by earlier prepends; duplicates elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.863 20:40:43 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same directories, /opt/go promoted to the front; duplicated string elided ...] 00:15:08.863 20:40:43 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same directories, /opt/protoc promoted to the front; duplicated string elided ...] 00:15:08.863 20:40:43 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:15:08.863 20:40:43 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo [... full duplicated PATH string elided ...] 00:15:08.863 20:40:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:15:08.863 20:40:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:08.863 20:40:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:08.863 20:40:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:08.864 20:40:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:08.864 20:40:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:08.864 20:40:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:08.864 20:40:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:08.864 20:40:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:08.864 20:40:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:08.864 20:40:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:08.864 20:40:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:15:08.864 20:40:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:08.864
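The PATH strings above balloon because paths/export.sh prepends the same /opt/golangci, /opt/protoc and /opt/go directories on every source. Purely as an illustrative sketch (the harness itself does no such cleanup), the duplicates could be collapsed like this:

    # illustrative only: drop duplicate PATH entries, keeping first occurrence
    dedup_path() {
        local IFS=: seen=: dir out=
        for dir in $PATH; do
            case $seen in *":$dir:"*) continue ;; esac
            seen="$seen$dir:"
            out="${out:+$out:}$dir"
        done
        PATH=$out
    }

20:40:43 nvmf_tcp.nvmf_nmic --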
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:08.864 20:40:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:08.864 20:40:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:08.864 20:40:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:08.864 20:40:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:08.864 20:40:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:08.864 20:40:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:08.864 20:40:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:08.864 20:40:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:08.864 20:40:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:15:08.864 20:40:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:14.161 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:14.161 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:14.161 Found net devices under 0000:86:00.0: cvl_0_0 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:14.161 Found net devices under 0000:86:00.1: cvl_0_1 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:14.161 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:14.162 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:14.162 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:14.162 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:14.162 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:14.162 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:14.162 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:14.162 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:14.162 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:14.162 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:14.162 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:14.162 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:14.162 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:14.162 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:14.162 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:14.162 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:14.162 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:14.162 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:14.162 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:14.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:14.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:15:14.162 00:15:14.162 --- 10.0.0.2 ping statistics --- 00:15:14.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.162 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:15:14.162 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:14.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:14.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:15:14.162 00:15:14.162 --- 10.0.0.1 ping statistics --- 00:15:14.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.162 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:15:14.162 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:14.162 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:15:14.162 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:14.162 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:14.162 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:14.162 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:14.162 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:14.162 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:14.162 20:40:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:14.162 20:40:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:14.162 20:40:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:14.162 20:40:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:14.162 20:40:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:14.162 20:40:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2667974 00:15:14.162 20:40:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2667974 00:15:14.162 20:40:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:14.162 20:40:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 2667974 ']' 00:15:14.162 20:40:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.162 20:40:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:14.162 20:40:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.162 20:40:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:14.162 20:40:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:14.162 [2024-07-15 20:40:48.066655] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:15:14.162 [2024-07-15 20:40:48.066698] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:14.162 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.162 [2024-07-15 20:40:48.123184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:14.162 [2024-07-15 20:40:48.197959] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:14.162 [2024-07-15 20:40:48.198000] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
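For reference, the loopback topology nvmftestinit assembled above, with the target NIC isolated in its own network namespace, condenses to the sequence below. The cvl_0_0/cvl_0_1 names are what this rig detected for its two E810 ports and will differ on other machines:

    # sketch of the harness netns split (names taken from this run)
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target NIC into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target sanity check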
00:15:14.162 [2024-07-15 20:40:48.198006] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:14.162 [2024-07-15 20:40:48.198012] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:14.162 [2024-07-15 20:40:48.198017] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:14.162 [2024-07-15 20:40:48.198089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.162 [2024-07-15 20:40:48.198185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:14.162 [2024-07-15 20:40:48.198276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:14.162 [2024-07-15 20:40:48.198278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.420 20:40:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:14.420 20:40:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:15:14.420 20:40:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:14.420 20:40:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:14.420 20:40:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:14.678 20:40:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:14.678 20:40:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:14.678 20:40:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.678 20:40:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:14.678 [2024-07-15 20:40:48.908078] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:14.678 20:40:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.678 20:40:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:14.678 20:40:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.678 20:40:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:14.678 Malloc0 00:15:14.678 20:40:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.678 20:40:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:14.678 20:40:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.678 20:40:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:14.678 20:40:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.678 20:40:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:14.678 20:40:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.678 20:40:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:14.678 20:40:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.678 20:40:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:14.678 20:40:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.678 20:40:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:14.678 [2024-07-15 20:40:48.959942] tcp.c: 981:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:14.678 20:40:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.678 20:40:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:14.678 test case1: single bdev can't be used in multiple subsystems 00:15:14.678 20:40:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:14.678 20:40:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.678 20:40:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:14.678 20:40:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.678 20:40:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:14.678 20:40:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.678 20:40:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:14.678 20:40:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.679 20:40:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:15:14.679 20:40:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:14.679 20:40:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.679 20:40:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:14.679 [2024-07-15 20:40:48.983862] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:14.679 [2024-07-15 20:40:48.983882] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:14.679 [2024-07-15 20:40:48.983889] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.679 request: 00:15:14.679 { 00:15:14.679 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:14.679 "namespace": { 00:15:14.679 "bdev_name": "Malloc0", 00:15:14.679 "no_auto_visible": false 00:15:14.679 }, 00:15:14.679 "method": "nvmf_subsystem_add_ns", 00:15:14.679 "req_id": 1 00:15:14.679 } 00:15:14.679 Got JSON-RPC error response 00:15:14.679 response: 00:15:14.679 { 00:15:14.679 "code": -32602, 00:15:14.679 "message": "Invalid parameters" 00:15:14.679 } 00:15:14.679 20:40:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:14.679 20:40:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:15:14.679 20:40:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:14.679 20:40:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:15:14.679 Adding namespace failed - expected result. 
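The JSON-RPC failure above is the point of test case1: cnode1 claimed Malloc0 with an exclusive_write claim, so the attempt to attach the same bdev to cnode2 is rejected in spdk_nvmf_subsystem_add_ns_ext. A minimal standalone sketch of the same check, using only the RPCs that appear in this log (rpc.py stands in for the full /var/jenkins/workspace/.../spdk/scripts/rpc.py path, and the if-wrapper around the second add_ns is illustrative shell, not a quote from nmic.sh):

    # Assumes a running nvmf_tgt; rpc.py is SPDK's scripts/rpc.py.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    # The second claim on Malloc0 must fail: the bdev is already claimed exclusive_write.
    if rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
        echo 'unexpected: one bdev attached to two subsystems' >&2
        exit 1
    fi
    echo ' Adding namespace failed - expected result.'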
00:15:14.679 20:40:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:14.679 test case2: host connect to nvmf target in multiple paths 00:15:14.679 20:40:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:14.679 20:40:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.679 20:40:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:14.679 [2024-07-15 20:40:48.995979] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:14.679 20:40:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.679 20:40:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:15.611 20:40:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:16.983 20:40:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:16.983 20:40:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:15:16.983 20:40:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:16.983 20:40:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:16.983 20:40:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:15:18.882 20:40:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:18.882 20:40:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:18.882 20:40:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:18.882 20:40:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:18.882 20:40:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:18.882 20:40:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:15:18.882 20:40:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:18.882 [global] 00:15:18.882 thread=1 00:15:18.882 invalidate=1 00:15:18.882 rw=write 00:15:18.882 time_based=1 00:15:18.882 runtime=1 00:15:18.882 ioengine=libaio 00:15:18.882 direct=1 00:15:18.882 bs=4096 00:15:18.882 iodepth=1 00:15:18.882 norandommap=0 00:15:18.882 numjobs=1 00:15:18.882 00:15:18.882 verify_dump=1 00:15:18.882 verify_backlog=512 00:15:18.882 verify_state_save=0 00:15:18.882 do_verify=1 00:15:18.882 verify=crc32c-intel 00:15:18.882 [job0] 00:15:18.882 filename=/dev/nvme0n1 00:15:18.882 Could not set queue depth (nvme0n1) 00:15:19.139 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:19.139 fio-3.35 00:15:19.139 Starting 1 thread 00:15:20.510 00:15:20.510 job0: (groupid=0, jobs=1): err= 0: pid=2669012: Mon Jul 15 20:40:54 2024 00:15:20.510 read: IOPS=21, BW=87.1KiB/s (89.2kB/s)(88.0KiB/1010msec) 00:15:20.510 slat (nsec): min=9858, max=22756, avg=20908.50, stdev=2541.81 
00:15:20.510 clat (usec): min=40796, max=41665, avg=40996.68, stdev=165.77 00:15:20.510 lat (usec): min=40818, max=41675, avg=41017.59, stdev=163.41 00:15:20.510 clat percentiles (usec): 00:15:20.510 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:15:20.510 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:20.510 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:20.510 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:15:20.510 | 99.99th=[41681] 00:15:20.510 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:15:20.510 slat (nsec): min=10196, max=43320, avg=11526.86, stdev=2369.99 00:15:20.510 clat (usec): min=166, max=339, avg=195.22, stdev=15.13 00:15:20.510 lat (usec): min=178, max=364, avg=206.75, stdev=15.88 00:15:20.510 clat percentiles (usec): 00:15:20.510 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 188], 00:15:20.510 | 30.00th=[ 190], 40.00th=[ 192], 50.00th=[ 194], 60.00th=[ 196], 00:15:20.510 | 70.00th=[ 198], 80.00th=[ 200], 90.00th=[ 206], 95.00th=[ 212], 00:15:20.510 | 99.00th=[ 255], 99.50th=[ 306], 99.90th=[ 338], 99.95th=[ 338], 00:15:20.510 | 99.99th=[ 338] 00:15:20.510 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:15:20.510 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:20.510 lat (usec) : 250=94.38%, 500=1.50% 00:15:20.510 lat (msec) : 50=4.12% 00:15:20.510 cpu : usr=0.69%, sys=0.59%, ctx=534, majf=0, minf=2 00:15:20.510 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:20.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:20.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:20.510 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:20.510 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:20.510 00:15:20.510 Run status group 0 (all jobs): 00:15:20.510 READ: bw=87.1KiB/s (89.2kB/s), 87.1KiB/s-87.1KiB/s (89.2kB/s-89.2kB/s), io=88.0KiB (90.1kB), run=1010-1010msec 00:15:20.510 WRITE: bw=2028KiB/s (2076kB/s), 2028KiB/s-2028KiB/s (2076kB/s-2076kB/s), io=2048KiB (2097kB), run=1010-1010msec 00:15:20.510 00:15:20.510 Disk stats (read/write): 00:15:20.510 nvme0n1: ios=68/512, merge=0/0, ticks=1134/95, in_queue=1229, util=95.49% 00:15:20.510 20:40:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:20.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:20.510 20:40:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:20.510 20:40:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:15:20.510 20:40:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:20.510 20:40:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:20.510 20:40:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:20.510 20:40:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:20.510 20:40:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:15:20.510 20:40:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:20.510 20:40:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:15:20.510 20:40:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:15:20.510 20:40:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:15:20.510 20:40:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:20.510 20:40:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:15:20.510 20:40:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:20.510 20:40:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:20.510 rmmod nvme_tcp 00:15:20.510 rmmod nvme_fabrics 00:15:20.768 rmmod nvme_keyring 00:15:20.768 20:40:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:20.768 20:40:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:15:20.768 20:40:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:15:20.768 20:40:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2667974 ']' 00:15:20.768 20:40:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2667974 00:15:20.768 20:40:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 2667974 ']' 00:15:20.768 20:40:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 2667974 00:15:20.768 20:40:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:15:20.768 20:40:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:20.768 20:40:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2667974 00:15:20.768 20:40:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:20.768 20:40:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:20.768 20:40:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2667974' 00:15:20.768 killing process with pid 2667974 00:15:20.768 20:40:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 2667974 00:15:20.768 20:40:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 2667974 00:15:21.026 20:40:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:21.026 20:40:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:21.026 20:40:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:21.026 20:40:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:21.026 20:40:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:21.026 20:40:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.026 20:40:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:21.026 20:40:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:22.925 20:40:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:22.926 00:15:22.926 real 0m14.275s 00:15:22.926 user 0m35.028s 00:15:22.926 sys 0m4.437s 00:15:22.926 20:40:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:22.926 20:40:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:22.926 ************************************ 00:15:22.926 END TEST nvmf_nmic 00:15:22.926 ************************************ 00:15:22.926 20:40:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:22.926 20:40:57 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:22.926 20:40:57 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:22.926 20:40:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:22.926 20:40:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:23.184 ************************************ 00:15:23.184 START TEST nvmf_fio_target 00:15:23.184 ************************************ 00:15:23.184 20:40:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:23.184 * Looking for test storage... 00:15:23.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:23.184 20:40:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:23.184 20:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:15:23.184 20:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:23.184 20:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:23.184 20:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:23.184 20:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:23.184 20:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:23.184 20:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:23.184 20:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:23.184 20:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:23.184 20:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:23.184 20:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:23.184 20:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:23.184 20:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:23.184 20:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:23.184 20:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:23.184 20:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:23.184 20:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:23.184 20:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:23.184 20:40:57 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:23.184 20:40:57 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:23.184 20:40:57 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:23.184 20:40:57 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.184 20:40:57 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.184 20:40:57 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.184 20:40:57 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:15:23.184 20:40:57 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.184 20:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:15:23.184 20:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:23.184 20:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:23.184 20:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:23.184 20:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:23.184 20:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:23.184 20:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:23.184 20:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:23.184 20:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:23.184 20:40:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:23.184 20:40:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:23.185 20:40:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:23.185 20:40:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:15:23.185 20:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:23.185 20:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:23.185 20:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:23.185 20:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:23.185 20:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:23.185 20:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:23.185 20:40:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:23.185 20:40:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:23.185 20:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:23.185 20:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:23.185 20:40:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:15:23.185 20:40:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:28.444 20:41:02 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:28.444 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:28.444 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:28.445 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:28.445 20:41:02 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:28.445 Found net devices under 0000:86:00.0: cvl_0_0 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:28.445 Found net devices under 0000:86:00.1: cvl_0_1 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:28.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:28.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:15:28.445 00:15:28.445 --- 10.0.0.2 ping statistics --- 00:15:28.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.445 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:28.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:28.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:15:28.445 00:15:28.445 --- 10.0.0.1 ping statistics --- 00:15:28.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.445 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2672671 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2672671 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 2672671 ']' 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
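Both test scripts reach this state through the same nvmf_tcp_init helper logged just above (nvmf/common.sh@229-268): the target-side port is moved into a private network namespace so that the initiator (cvl_0_1, 10.0.0.1) and the target (cvl_0_0, 10.0.0.2) exercise the physical NICs on a single host, and the two pings verify the path in both directions. Condensed from the commands logged above, with the interface, namespace, and address values this run used (run as root):

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean addresses
    ip netns add cvl_0_0_ns_spdk                  # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator IP, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

With this in place, nvmf_tgt is launched under "ip netns exec cvl_0_0_ns_spdk" (the NVMF_TARGET_NS_CMD prefix seen above), so only the target's listener lives in the namespace.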
00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:28.445 20:41:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.445 [2024-07-15 20:41:02.514582] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:15:28.445 [2024-07-15 20:41:02.514627] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.445 EAL: No free 2048 kB hugepages reported on node 1 00:15:28.445 [2024-07-15 20:41:02.572716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:28.445 [2024-07-15 20:41:02.653000] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:28.445 [2024-07-15 20:41:02.653034] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:28.445 [2024-07-15 20:41:02.653041] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:28.445 [2024-07-15 20:41:02.653048] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:28.445 [2024-07-15 20:41:02.653053] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:28.445 [2024-07-15 20:41:02.653094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:28.445 [2024-07-15 20:41:02.653190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:28.445 [2024-07-15 20:41:02.653275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:28.445 [2024-07-15 20:41:02.653277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.008 20:41:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:29.008 20:41:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:15:29.008 20:41:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:29.008 20:41:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:29.008 20:41:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.008 20:41:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:29.008 20:41:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:29.265 [2024-07-15 20:41:03.531696] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:29.265 20:41:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:29.523 20:41:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:29.523 20:41:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:29.523 20:41:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:29.523 20:41:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:29.780 20:41:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
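fio.sh builds its namespace inventory by capturing the bdev name that each bdev_malloc_create RPC prints, accumulating the names in $malloc_bdevs and $raid_malloc_bdevs before striping the latter pair into raid0 (visible in the entries that follow). A trimmed sketch of that assembly, with rpc.py again standing in for the full scripts/rpc.py path and the Malloc names being the ones this run produced:

    malloc_bdevs="$(rpc.py bdev_malloc_create 64 512) "        # -> Malloc0
    malloc_bdevs+=$(rpc.py bdev_malloc_create 64 512)          # -> Malloc1
    raid_malloc_bdevs="$(rpc.py bdev_malloc_create 64 512) "   # -> Malloc2
    raid_malloc_bdevs+=$(rpc.py bdev_malloc_create 64 512)     # -> Malloc3
    # Stripe the two raid members into a RAID0 bdev with a 64 KiB strip size:
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b "$raid_malloc_bdevs"

The same pattern repeats below for concat0 built from Malloc4-Malloc6, after which the malloc, raid0, and concat0 bdevs are all exported as namespaces of cnode1.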
00:15:29.780 20:41:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:30.037 20:41:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:30.037 20:41:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:30.295 20:41:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:30.295 20:41:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:30.295 20:41:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:30.552 20:41:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:30.552 20:41:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:30.810 20:41:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:15:30.810 20:41:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:30.810 20:41:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:31.067 20:41:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:31.067 20:41:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:31.324 20:41:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:31.325 20:41:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:31.582 20:41:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:31.582 [2024-07-15 20:41:05.993441] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:31.582 20:41:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:31.839 20:41:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:32.097 20:41:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:33.028 20:41:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:15:33.028 20:41:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:15:33.028 20:41:07 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:33.028 20:41:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:15:33.028 20:41:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:15:33.028 20:41:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:15:35.622 20:41:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:35.622 20:41:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:35.622 20:41:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:35.622 20:41:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:15:35.622 20:41:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:35.622 20:41:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:15:35.622 20:41:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:35.622 [global] 00:15:35.622 thread=1 00:15:35.622 invalidate=1 00:15:35.622 rw=write 00:15:35.622 time_based=1 00:15:35.622 runtime=1 00:15:35.622 ioengine=libaio 00:15:35.622 direct=1 00:15:35.622 bs=4096 00:15:35.622 iodepth=1 00:15:35.622 norandommap=0 00:15:35.622 numjobs=1 00:15:35.622 00:15:35.622 verify_dump=1 00:15:35.622 verify_backlog=512 00:15:35.622 verify_state_save=0 00:15:35.622 do_verify=1 00:15:35.622 verify=crc32c-intel 00:15:35.622 [job0] 00:15:35.622 filename=/dev/nvme0n1 00:15:35.622 [job1] 00:15:35.622 filename=/dev/nvme0n2 00:15:35.622 [job2] 00:15:35.622 filename=/dev/nvme0n3 00:15:35.622 [job3] 00:15:35.622 filename=/dev/nvme0n4 00:15:35.622 Could not set queue depth (nvme0n1) 00:15:35.622 Could not set queue depth (nvme0n2) 00:15:35.622 Could not set queue depth (nvme0n3) 00:15:35.622 Could not set queue depth (nvme0n4) 00:15:35.622 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:35.622 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:35.622 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:35.622 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:35.622 fio-3.35 00:15:35.622 Starting 4 threads 00:15:36.994 00:15:36.994 job0: (groupid=0, jobs=1): err= 0: pid=2674023: Mon Jul 15 20:41:11 2024 00:15:36.994 read: IOPS=235, BW=940KiB/s (963kB/s)(960KiB/1021msec) 00:15:36.994 slat (nsec): min=6857, max=35412, avg=9252.05, stdev=4682.41 00:15:36.994 clat (usec): min=268, max=41297, avg=3731.82, stdev=11256.49 00:15:36.994 lat (usec): min=275, max=41333, avg=3741.07, stdev=11260.88 00:15:36.994 clat percentiles (usec): 00:15:36.994 | 1.00th=[ 281], 5.00th=[ 285], 10.00th=[ 297], 20.00th=[ 310], 00:15:36.994 | 30.00th=[ 322], 40.00th=[ 343], 50.00th=[ 351], 60.00th=[ 367], 00:15:36.994 | 70.00th=[ 371], 80.00th=[ 379], 90.00th=[ 412], 95.00th=[41157], 00:15:36.994 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:36.994 | 99.99th=[41157] 00:15:36.994 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:15:36.994 slat (nsec): min=9973, max=56749, avg=12024.07, stdev=3720.47 00:15:36.994 clat 
(usec): min=163, max=782, avg=223.94, stdev=46.07 00:15:36.994 lat (usec): min=174, max=839, avg=235.96, stdev=48.27 00:15:36.994 clat percentiles (usec): 00:15:36.994 | 1.00th=[ 176], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 198], 00:15:36.994 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 223], 00:15:36.994 | 70.00th=[ 229], 80.00th=[ 239], 90.00th=[ 255], 95.00th=[ 285], 00:15:36.994 | 99.00th=[ 388], 99.50th=[ 537], 99.90th=[ 783], 99.95th=[ 783], 00:15:36.994 | 99.99th=[ 783] 00:15:36.994 bw ( KiB/s): min= 4096, max= 4096, per=19.63%, avg=4096.00, stdev= 0.00, samples=1 00:15:36.994 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:36.994 lat (usec) : 250=60.77%, 500=36.04%, 750=0.40%, 1000=0.13% 00:15:36.994 lat (msec) : 50=2.66% 00:15:36.994 cpu : usr=0.49%, sys=0.78%, ctx=754, majf=0, minf=1 00:15:36.994 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:36.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:36.994 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:36.994 issued rwts: total=240,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:36.994 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:36.994 job1: (groupid=0, jobs=1): err= 0: pid=2674024: Mon Jul 15 20:41:11 2024 00:15:36.994 read: IOPS=524, BW=2096KiB/s (2147kB/s)(2176KiB/1038msec) 00:15:36.994 slat (nsec): min=7182, max=26437, avg=9011.14, stdev=2793.76 00:15:36.994 clat (usec): min=323, max=41040, avg=1473.77, stdev=6655.93 00:15:36.994 lat (usec): min=331, max=41063, avg=1482.78, stdev=6658.09 00:15:36.994 clat percentiles (usec): 00:15:36.994 | 1.00th=[ 326], 5.00th=[ 330], 10.00th=[ 334], 20.00th=[ 343], 00:15:36.994 | 30.00th=[ 347], 40.00th=[ 351], 50.00th=[ 355], 60.00th=[ 359], 00:15:36.994 | 70.00th=[ 359], 80.00th=[ 367], 90.00th=[ 371], 95.00th=[ 388], 00:15:36.994 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:36.994 | 99.99th=[41157] 00:15:36.994 write: IOPS=986, BW=3946KiB/s (4041kB/s)(4096KiB/1038msec); 0 zone resets 00:15:36.994 slat (nsec): min=3238, max=46179, avg=11134.12, stdev=3803.96 00:15:36.994 clat (usec): min=142, max=727, avg=210.03, stdev=40.59 00:15:36.994 lat (usec): min=154, max=742, avg=221.16, stdev=41.21 00:15:36.994 clat percentiles (usec): 00:15:36.994 | 1.00th=[ 153], 5.00th=[ 163], 10.00th=[ 172], 20.00th=[ 186], 00:15:36.994 | 30.00th=[ 192], 40.00th=[ 198], 50.00th=[ 206], 60.00th=[ 215], 00:15:36.994 | 70.00th=[ 217], 80.00th=[ 227], 90.00th=[ 255], 95.00th=[ 269], 00:15:36.994 | 99.00th=[ 338], 99.50th=[ 392], 99.90th=[ 570], 99.95th=[ 725], 00:15:36.994 | 99.99th=[ 725] 00:15:36.994 bw ( KiB/s): min= 8192, max= 8192, per=39.25%, avg=8192.00, stdev= 0.00, samples=1 00:15:36.994 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:36.994 lat (usec) : 250=57.84%, 500=40.94%, 750=0.26% 00:15:36.994 lat (msec) : 50=0.96% 00:15:36.994 cpu : usr=0.96%, sys=2.51%, ctx=1570, majf=0, minf=2 00:15:36.994 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:36.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:36.994 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:36.994 issued rwts: total=544,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:36.994 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:36.994 job2: (groupid=0, jobs=1): err= 0: pid=2674031: Mon Jul 15 20:41:11 2024 00:15:36.994 read: IOPS=1534, 
BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:15:36.994 slat (nsec): min=7422, max=37096, avg=8444.68, stdev=1259.47 00:15:36.994 clat (usec): min=256, max=564, avg=372.83, stdev=59.07 00:15:36.994 lat (usec): min=265, max=572, avg=381.27, stdev=59.15 00:15:36.994 clat percentiles (usec): 00:15:36.994 | 1.00th=[ 277], 5.00th=[ 293], 10.00th=[ 306], 20.00th=[ 322], 00:15:36.994 | 30.00th=[ 334], 40.00th=[ 351], 50.00th=[ 363], 60.00th=[ 371], 00:15:36.994 | 70.00th=[ 392], 80.00th=[ 437], 90.00th=[ 461], 95.00th=[ 482], 00:15:36.994 | 99.00th=[ 515], 99.50th=[ 545], 99.90th=[ 562], 99.95th=[ 562], 00:15:36.994 | 99.99th=[ 562] 00:15:36.995 write: IOPS=1830, BW=7321KiB/s (7496kB/s)(7328KiB/1001msec); 0 zone resets 00:15:36.995 slat (nsec): min=10972, max=74683, avg=12506.69, stdev=2484.18 00:15:36.995 clat (usec): min=158, max=4294, avg=208.11, stdev=102.04 00:15:36.995 lat (usec): min=170, max=4319, avg=220.62, stdev=102.68 00:15:36.995 clat percentiles (usec): 00:15:36.995 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 180], 20.00th=[ 184], 00:15:36.995 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 204], 00:15:36.995 | 70.00th=[ 212], 80.00th=[ 223], 90.00th=[ 241], 95.00th=[ 255], 00:15:36.995 | 99.00th=[ 314], 99.50th=[ 420], 99.90th=[ 668], 99.95th=[ 4293], 00:15:36.995 | 99.99th=[ 4293] 00:15:36.995 bw ( KiB/s): min= 8192, max= 8192, per=39.25%, avg=8192.00, stdev= 0.00, samples=1 00:15:36.995 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:36.995 lat (usec) : 250=51.22%, 500=47.74%, 750=1.01% 00:15:36.995 lat (msec) : 10=0.03% 00:15:36.995 cpu : usr=3.40%, sys=5.00%, ctx=3370, majf=0, minf=1 00:15:36.995 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:36.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:36.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:36.995 issued rwts: total=1536,1832,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:36.995 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:36.995 job3: (groupid=0, jobs=1): err= 0: pid=2674032: Mon Jul 15 20:41:11 2024 00:15:36.995 read: IOPS=1545, BW=6182KiB/s (6330kB/s)(6188KiB/1001msec) 00:15:36.995 slat (nsec): min=7163, max=24055, avg=8103.13, stdev=1119.56 00:15:36.995 clat (usec): min=273, max=504, avg=344.92, stdev=46.28 00:15:36.995 lat (usec): min=281, max=512, avg=353.03, stdev=46.39 00:15:36.995 clat percentiles (usec): 00:15:36.995 | 1.00th=[ 281], 5.00th=[ 293], 10.00th=[ 297], 20.00th=[ 306], 00:15:36.995 | 30.00th=[ 314], 40.00th=[ 322], 50.00th=[ 330], 60.00th=[ 338], 00:15:36.995 | 70.00th=[ 363], 80.00th=[ 388], 90.00th=[ 420], 95.00th=[ 437], 00:15:36.995 | 99.00th=[ 457], 99.50th=[ 486], 99.90th=[ 498], 99.95th=[ 506], 00:15:36.995 | 99.99th=[ 506] 00:15:36.995 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:15:36.995 slat (nsec): min=10273, max=43673, avg=11880.00, stdev=2071.89 00:15:36.995 clat (usec): min=153, max=4058, avg=204.71, stdev=92.10 00:15:36.995 lat (usec): min=165, max=4069, avg=216.59, stdev=92.25 00:15:36.995 clat percentiles (usec): 00:15:36.995 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 184], 00:15:36.995 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 204], 00:15:36.995 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 231], 95.00th=[ 241], 00:15:36.995 | 99.00th=[ 289], 99.50th=[ 338], 99.90th=[ 709], 99.95th=[ 1106], 00:15:36.995 | 99.99th=[ 4047] 00:15:36.995 bw ( KiB/s): min= 8192, max= 8192, per=39.25%, 
avg=8192.00, stdev= 0.00, samples=1 00:15:36.995 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:36.995 lat (usec) : 250=55.05%, 500=44.78%, 750=0.11% 00:15:36.995 lat (msec) : 2=0.03%, 10=0.03% 00:15:36.995 cpu : usr=3.80%, sys=5.00%, ctx=3595, majf=0, minf=1 00:15:36.995 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:36.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:36.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:36.995 issued rwts: total=1547,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:36.995 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:36.995 00:15:36.995 Run status group 0 (all jobs): 00:15:36.995 READ: bw=14.6MiB/s (15.3MB/s), 940KiB/s-6182KiB/s (963kB/s-6330kB/s), io=15.1MiB (15.8MB), run=1001-1038msec 00:15:36.995 WRITE: bw=20.4MiB/s (21.4MB/s), 2006KiB/s-8184KiB/s (2054kB/s-8380kB/s), io=21.2MiB (22.2MB), run=1001-1038msec 00:15:36.995 00:15:36.995 Disk stats (read/write): 00:15:36.995 nvme0n1: ios=259/512, merge=0/0, ticks=1553/111, in_queue=1664, util=85.77% 00:15:36.995 nvme0n2: ios=594/1024, merge=0/0, ticks=672/196, in_queue=868, util=91.17% 00:15:36.995 nvme0n3: ios=1318/1536, merge=0/0, ticks=1380/315, in_queue=1695, util=93.55% 00:15:36.995 nvme0n4: ios=1486/1536, merge=0/0, ticks=562/298, in_queue=860, util=95.39% 00:15:36.995 20:41:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:36.995 [global] 00:15:36.995 thread=1 00:15:36.995 invalidate=1 00:15:36.995 rw=randwrite 00:15:36.995 time_based=1 00:15:36.995 runtime=1 00:15:36.995 ioengine=libaio 00:15:36.995 direct=1 00:15:36.995 bs=4096 00:15:36.995 iodepth=1 00:15:36.995 norandommap=0 00:15:36.995 numjobs=1 00:15:36.995 00:15:36.995 verify_dump=1 00:15:36.995 verify_backlog=512 00:15:36.995 verify_state_save=0 00:15:36.995 do_verify=1 00:15:36.995 verify=crc32c-intel 00:15:36.995 [job0] 00:15:36.995 filename=/dev/nvme0n1 00:15:36.995 [job1] 00:15:36.995 filename=/dev/nvme0n2 00:15:36.995 [job2] 00:15:36.995 filename=/dev/nvme0n3 00:15:36.995 [job3] 00:15:36.995 filename=/dev/nvme0n4 00:15:36.995 Could not set queue depth (nvme0n1) 00:15:36.995 Could not set queue depth (nvme0n2) 00:15:36.995 Could not set queue depth (nvme0n3) 00:15:36.995 Could not set queue depth (nvme0n4) 00:15:36.995 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:36.995 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:36.995 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:36.995 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:36.995 fio-3.35 00:15:36.995 Starting 4 threads 00:15:38.367 00:15:38.367 job0: (groupid=0, jobs=1): err= 0: pid=2674402: Mon Jul 15 20:41:12 2024 00:15:38.367 read: IOPS=21, BW=85.2KiB/s (87.2kB/s)(88.0KiB/1033msec) 00:15:38.367 slat (nsec): min=10886, max=28331, avg=23615.09, stdev=3252.76 00:15:38.367 clat (usec): min=40881, max=41482, avg=40988.03, stdev=119.88 00:15:38.367 lat (usec): min=40905, max=41493, avg=41011.65, stdev=117.12 00:15:38.367 clat percentiles (usec): 00:15:38.367 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:15:38.368 | 30.00th=[41157], 40.00th=[41157], 
50.00th=[41157], 60.00th=[41157], 00:15:38.368 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:38.368 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:15:38.368 | 99.99th=[41681] 00:15:38.368 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:15:38.368 slat (nsec): min=11372, max=37561, avg=13143.80, stdev=1894.81 00:15:38.368 clat (usec): min=170, max=344, avg=238.22, stdev=32.94 00:15:38.368 lat (usec): min=182, max=382, avg=251.37, stdev=33.28 00:15:38.368 clat percentiles (usec): 00:15:38.368 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 208], 00:15:38.368 | 30.00th=[ 217], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 243], 00:15:38.368 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 297], 00:15:38.368 | 99.00th=[ 326], 99.50th=[ 334], 99.90th=[ 347], 99.95th=[ 347], 00:15:38.368 | 99.99th=[ 347] 00:15:38.368 bw ( KiB/s): min= 4096, max= 4096, per=29.51%, avg=4096.00, stdev= 0.00, samples=1 00:15:38.368 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:38.368 lat (usec) : 250=62.55%, 500=33.33% 00:15:38.368 lat (msec) : 50=4.12% 00:15:38.368 cpu : usr=0.29%, sys=1.07%, ctx=535, majf=0, minf=1 00:15:38.368 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:38.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.368 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.368 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:38.368 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:38.368 job1: (groupid=0, jobs=1): err= 0: pid=2674403: Mon Jul 15 20:41:12 2024 00:15:38.368 read: IOPS=22, BW=90.9KiB/s (93.1kB/s)(92.0KiB/1012msec) 00:15:38.368 slat (nsec): min=7921, max=26760, avg=21560.48, stdev=4141.80 00:15:38.368 clat (usec): min=261, max=41391, avg=39219.17, stdev=8493.15 00:15:38.368 lat (usec): min=270, max=41399, avg=39240.73, stdev=8495.71 00:15:38.368 clat percentiles (usec): 00:15:38.368 | 1.00th=[ 262], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:15:38.368 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:38.368 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:38.368 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:38.368 | 99.99th=[41157] 00:15:38.368 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:15:38.368 slat (nsec): min=10666, max=36669, avg=12708.46, stdev=2443.36 00:15:38.368 clat (usec): min=162, max=394, avg=197.04, stdev=20.79 00:15:38.368 lat (usec): min=173, max=431, avg=209.74, stdev=21.69 00:15:38.368 clat percentiles (usec): 00:15:38.368 | 1.00th=[ 167], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 180], 00:15:38.368 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 200], 00:15:38.368 | 70.00th=[ 206], 80.00th=[ 215], 90.00th=[ 223], 95.00th=[ 231], 00:15:38.368 | 99.00th=[ 249], 99.50th=[ 258], 99.90th=[ 396], 99.95th=[ 396], 00:15:38.368 | 99.99th=[ 396] 00:15:38.368 bw ( KiB/s): min= 4096, max= 4096, per=29.51%, avg=4096.00, stdev= 0.00, samples=1 00:15:38.368 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:38.368 lat (usec) : 250=94.77%, 500=1.12% 00:15:38.368 lat (msec) : 50=4.11% 00:15:38.368 cpu : usr=0.89%, sys=0.59%, ctx=536, majf=0, minf=1 00:15:38.368 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:38.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.368 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.368 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:38.368 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:38.368 job2: (groupid=0, jobs=1): err= 0: pid=2674404: Mon Jul 15 20:41:12 2024 00:15:38.368 read: IOPS=1827, BW=7309KiB/s (7484kB/s)(7316KiB/1001msec) 00:15:38.368 slat (nsec): min=6845, max=24519, avg=8193.45, stdev=1102.65 00:15:38.368 clat (usec): min=251, max=1659, avg=289.75, stdev=39.68 00:15:38.368 lat (usec): min=259, max=1667, avg=297.94, stdev=39.75 00:15:38.368 clat percentiles (usec): 00:15:38.368 | 1.00th=[ 260], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 273], 00:15:38.368 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 289], 00:15:38.368 | 70.00th=[ 293], 80.00th=[ 297], 90.00th=[ 306], 95.00th=[ 338], 00:15:38.368 | 99.00th=[ 375], 99.50th=[ 392], 99.90th=[ 586], 99.95th=[ 1663], 00:15:38.368 | 99.99th=[ 1663] 00:15:38.368 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:15:38.368 slat (nsec): min=10536, max=36028, avg=12000.26, stdev=1812.71 00:15:38.368 clat (usec): min=164, max=1275, avg=204.49, stdev=43.39 00:15:38.368 lat (usec): min=176, max=1287, avg=216.49, stdev=43.90 00:15:38.368 clat percentiles (usec): 00:15:38.368 | 1.00th=[ 169], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 180], 00:15:38.368 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 198], 00:15:38.368 | 70.00th=[ 215], 80.00th=[ 229], 90.00th=[ 247], 95.00th=[ 265], 00:15:38.368 | 99.00th=[ 293], 99.50th=[ 322], 99.90th=[ 816], 99.95th=[ 873], 00:15:38.368 | 99.99th=[ 1270] 00:15:38.368 bw ( KiB/s): min= 8192, max= 8192, per=59.03%, avg=8192.00, stdev= 0.00, samples=1 00:15:38.368 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:38.368 lat (usec) : 250=48.28%, 500=51.56%, 750=0.05%, 1000=0.05% 00:15:38.368 lat (msec) : 2=0.05% 00:15:38.368 cpu : usr=3.30%, sys=6.00%, ctx=3878, majf=0, minf=1 00:15:38.368 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:38.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.368 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.368 issued rwts: total=1829,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:38.368 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:38.368 job3: (groupid=0, jobs=1): err= 0: pid=2674405: Mon Jul 15 20:41:12 2024 00:15:38.368 read: IOPS=97, BW=392KiB/s (401kB/s)(392KiB/1001msec) 00:15:38.368 slat (nsec): min=6926, max=24867, avg=10751.02, stdev=5937.68 00:15:38.368 clat (usec): min=318, max=41496, avg=8869.21, stdev=16498.66 00:15:38.368 lat (usec): min=326, max=41507, avg=8879.96, stdev=16503.37 00:15:38.368 clat percentiles (usec): 00:15:38.368 | 1.00th=[ 318], 5.00th=[ 338], 10.00th=[ 343], 20.00th=[ 351], 00:15:38.368 | 30.00th=[ 359], 40.00th=[ 363], 50.00th=[ 367], 60.00th=[ 371], 00:15:38.368 | 70.00th=[ 383], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:15:38.368 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:15:38.368 | 99.99th=[41681] 00:15:38.368 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:15:38.368 slat (nsec): min=10540, max=48996, avg=12434.86, stdev=2518.85 00:15:38.368 clat (usec): min=180, max=393, avg=238.11, stdev=27.60 00:15:38.368 lat (usec): min=192, max=405, avg=250.54, stdev=27.86 00:15:38.368 clat percentiles 
(usec): 00:15:38.368 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 212], 00:15:38.368 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 243], 00:15:38.368 | 70.00th=[ 253], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 285], 00:15:38.368 | 99.00th=[ 293], 99.50th=[ 310], 99.90th=[ 396], 99.95th=[ 396], 00:15:38.368 | 99.99th=[ 396] 00:15:38.368 bw ( KiB/s): min= 4096, max= 4096, per=29.51%, avg=4096.00, stdev= 0.00, samples=1 00:15:38.368 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:38.368 lat (usec) : 250=57.05%, 500=39.34%, 750=0.16% 00:15:38.368 lat (msec) : 50=3.44% 00:15:38.368 cpu : usr=0.30%, sys=0.80%, ctx=610, majf=0, minf=2 00:15:38.368 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:38.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.368 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.368 issued rwts: total=98,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:38.368 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:38.368 00:15:38.368 Run status group 0 (all jobs): 00:15:38.368 READ: bw=7636KiB/s (7819kB/s), 85.2KiB/s-7309KiB/s (87.2kB/s-7484kB/s), io=7888KiB (8077kB), run=1001-1033msec 00:15:38.368 WRITE: bw=13.6MiB/s (14.2MB/s), 1983KiB/s-8184KiB/s (2030kB/s-8380kB/s), io=14.0MiB (14.7MB), run=1001-1033msec 00:15:38.368 00:15:38.368 Disk stats (read/write): 00:15:38.368 nvme0n1: ios=42/512, merge=0/0, ticks=1683/119, in_queue=1802, util=98.30% 00:15:38.368 nvme0n2: ios=56/512, merge=0/0, ticks=1661/90, in_queue=1751, util=96.85% 00:15:38.368 nvme0n3: ios=1562/1779, merge=0/0, ticks=1417/345, in_queue=1762, util=98.44% 00:15:38.368 nvme0n4: ios=44/512, merge=0/0, ticks=805/121, in_queue=926, util=90.99% 00:15:38.368 20:41:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:38.368 [global] 00:15:38.368 thread=1 00:15:38.368 invalidate=1 00:15:38.368 rw=write 00:15:38.368 time_based=1 00:15:38.368 runtime=1 00:15:38.368 ioengine=libaio 00:15:38.368 direct=1 00:15:38.368 bs=4096 00:15:38.368 iodepth=128 00:15:38.368 norandommap=0 00:15:38.368 numjobs=1 00:15:38.368 00:15:38.368 verify_dump=1 00:15:38.368 verify_backlog=512 00:15:38.368 verify_state_save=0 00:15:38.368 do_verify=1 00:15:38.368 verify=crc32c-intel 00:15:38.368 [job0] 00:15:38.368 filename=/dev/nvme0n1 00:15:38.368 [job1] 00:15:38.368 filename=/dev/nvme0n2 00:15:38.368 [job2] 00:15:38.368 filename=/dev/nvme0n3 00:15:38.368 [job3] 00:15:38.368 filename=/dev/nvme0n4 00:15:38.368 Could not set queue depth (nvme0n1) 00:15:38.368 Could not set queue depth (nvme0n2) 00:15:38.368 Could not set queue depth (nvme0n3) 00:15:38.368 Could not set queue depth (nvme0n4) 00:15:38.626 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:38.626 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:38.626 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:38.626 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:38.626 fio-3.35 00:15:38.626 Starting 4 threads 00:15:39.998 00:15:39.998 job0: (groupid=0, jobs=1): err= 0: pid=2674774: Mon Jul 15 20:41:14 2024 00:15:39.998 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 
00:15:39.998 slat (nsec): min=1134, max=24930k, avg=99616.41, stdev=631091.04 00:15:39.998 clat (usec): min=5051, max=70221, avg=13578.69, stdev=8033.60 00:15:39.998 lat (usec): min=5053, max=70236, avg=13678.31, stdev=8071.75 00:15:39.998 clat percentiles (usec): 00:15:39.998 | 1.00th=[ 8160], 5.00th=[ 9372], 10.00th=[10028], 20.00th=[10814], 00:15:39.998 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11469], 60.00th=[11731], 00:15:39.998 | 70.00th=[12256], 80.00th=[13042], 90.00th=[18744], 95.00th=[21627], 00:15:39.998 | 99.00th=[63177], 99.50th=[65274], 99.90th=[69731], 99.95th=[69731], 00:15:39.998 | 99.99th=[69731] 00:15:39.998 write: IOPS=5137, BW=20.1MiB/s (21.0MB/s)(20.1MiB/1003msec); 0 zone resets 00:15:39.998 slat (usec): min=2, max=12917, avg=91.29, stdev=486.36 00:15:39.998 clat (usec): min=305, max=33125, avg=11189.72, stdev=3014.36 00:15:39.998 lat (usec): min=997, max=33134, avg=11281.01, stdev=3024.09 00:15:39.998 clat percentiles (usec): 00:15:39.998 | 1.00th=[ 2606], 5.00th=[ 8356], 10.00th=[ 9372], 20.00th=[10290], 00:15:39.998 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10814], 60.00th=[10945], 00:15:39.998 | 70.00th=[11076], 80.00th=[11731], 90.00th=[12780], 95.00th=[17171], 00:15:39.998 | 99.00th=[23462], 99.50th=[24773], 99.90th=[33162], 99.95th=[33162], 00:15:39.998 | 99.99th=[33162] 00:15:39.998 bw ( KiB/s): min=18656, max=22304, per=29.36%, avg=20480.00, stdev=2579.53, samples=2 00:15:39.998 iops : min= 4664, max= 5576, avg=5120.00, stdev=644.88, samples=2 00:15:39.998 lat (usec) : 500=0.01%, 1000=0.02% 00:15:39.998 lat (msec) : 2=0.13%, 4=0.62%, 10=12.66%, 20=82.29%, 50=3.34% 00:15:39.998 lat (msec) : 100=0.92% 00:15:39.998 cpu : usr=3.09%, sys=4.09%, ctx=610, majf=0, minf=1 00:15:39.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:15:39.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:39.998 issued rwts: total=5120,5153,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:39.998 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:39.998 job1: (groupid=0, jobs=1): err= 0: pid=2674775: Mon Jul 15 20:41:14 2024 00:15:39.998 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:15:39.998 slat (nsec): min=1238, max=22985k, avg=116847.28, stdev=862355.84 00:15:39.998 clat (usec): min=3103, max=53402, avg=15516.02, stdev=7188.50 00:15:39.998 lat (usec): min=3111, max=53411, avg=15632.87, stdev=7247.70 00:15:39.998 clat percentiles (usec): 00:15:39.998 | 1.00th=[ 4490], 5.00th=[ 9765], 10.00th=[10814], 20.00th=[11338], 00:15:39.999 | 30.00th=[12125], 40.00th=[13042], 50.00th=[13698], 60.00th=[14222], 00:15:39.999 | 70.00th=[15270], 80.00th=[17695], 90.00th=[23987], 95.00th=[31327], 00:15:39.999 | 99.00th=[43779], 99.50th=[45351], 99.90th=[53216], 99.95th=[53216], 00:15:39.999 | 99.99th=[53216] 00:15:39.999 write: IOPS=4149, BW=16.2MiB/s (17.0MB/s)(16.3MiB/1008msec); 0 zone resets 00:15:39.999 slat (usec): min=2, max=22291, avg=116.52, stdev=848.39 00:15:39.999 clat (usec): min=3859, max=61297, avg=15042.47, stdev=9971.80 00:15:39.999 lat (usec): min=4707, max=61300, avg=15158.99, stdev=10039.54 00:15:39.999 clat percentiles (usec): 00:15:39.999 | 1.00th=[ 5407], 5.00th=[ 7177], 10.00th=[ 8455], 20.00th=[ 9896], 00:15:39.999 | 30.00th=[10421], 40.00th=[11076], 50.00th=[11994], 60.00th=[12518], 00:15:39.999 | 70.00th=[13435], 80.00th=[17957], 90.00th=[28443], 95.00th=[33817], 00:15:39.999 | 99.00th=[61080], 
99.50th=[61080], 99.90th=[61080], 99.95th=[61080], 00:15:39.999 | 99.99th=[61080] 00:15:39.999 bw ( KiB/s): min=12288, max=20480, per=23.49%, avg=16384.00, stdev=5792.62, samples=2 00:15:39.999 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:15:39.999 lat (msec) : 4=0.41%, 10=13.99%, 20=69.82%, 50=14.42%, 100=1.36% 00:15:39.999 cpu : usr=3.38%, sys=4.87%, ctx=281, majf=0, minf=1 00:15:39.999 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:39.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:39.999 issued rwts: total=4096,4183,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:39.999 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:39.999 job2: (groupid=0, jobs=1): err= 0: pid=2674780: Mon Jul 15 20:41:14 2024 00:15:39.999 read: IOPS=3037, BW=11.9MiB/s (12.4MB/s)(11.9MiB/1006msec) 00:15:39.999 slat (nsec): min=1355, max=13716k, avg=144578.64, stdev=957653.75 00:15:39.999 clat (usec): min=3623, max=58696, avg=16725.52, stdev=7380.08 00:15:39.999 lat (usec): min=4821, max=58705, avg=16870.10, stdev=7459.67 00:15:39.999 clat percentiles (usec): 00:15:39.999 | 1.00th=[ 5866], 5.00th=[10945], 10.00th=[11863], 20.00th=[12387], 00:15:39.999 | 30.00th=[12649], 40.00th=[14484], 50.00th=[14877], 60.00th=[15401], 00:15:39.999 | 70.00th=[16909], 80.00th=[20055], 90.00th=[22414], 95.00th=[29754], 00:15:39.999 | 99.00th=[50594], 99.50th=[53216], 99.90th=[58459], 99.95th=[58459], 00:15:39.999 | 99.99th=[58459] 00:15:39.999 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets 00:15:39.999 slat (usec): min=3, max=21169, avg=171.79, stdev=942.90 00:15:39.999 clat (usec): min=807, max=63045, avg=24899.66, stdev=14376.77 00:15:39.999 lat (usec): min=828, max=63055, avg=25071.45, stdev=14469.49 00:15:39.999 clat percentiles (usec): 00:15:39.999 | 1.00th=[ 4555], 5.00th=[ 6456], 10.00th=[ 9503], 20.00th=[11863], 00:15:39.999 | 30.00th=[12125], 40.00th=[19006], 50.00th=[20841], 60.00th=[25560], 00:15:39.999 | 70.00th=[34866], 80.00th=[40109], 90.00th=[45351], 95.00th=[47973], 00:15:39.999 | 99.00th=[62653], 99.50th=[63177], 99.90th=[63177], 99.95th=[63177], 00:15:39.999 | 99.99th=[63177] 00:15:39.999 bw ( KiB/s): min=10000, max=14576, per=17.62%, avg=12288.00, stdev=3235.72, samples=2 00:15:39.999 iops : min= 2500, max= 3644, avg=3072.00, stdev=808.93, samples=2 00:15:39.999 lat (usec) : 1000=0.07% 00:15:39.999 lat (msec) : 4=0.21%, 10=7.16%, 20=54.13%, 50=36.29%, 100=2.14% 00:15:39.999 cpu : usr=2.59%, sys=4.48%, ctx=354, majf=0, minf=1 00:15:39.999 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:15:39.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:39.999 issued rwts: total=3056,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:39.999 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:39.999 job3: (groupid=0, jobs=1): err= 0: pid=2674781: Mon Jul 15 20:41:14 2024 00:15:39.999 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:15:39.999 slat (nsec): min=1532, max=22337k, avg=102079.97, stdev=799936.08 00:15:39.999 clat (usec): min=5462, max=54772, avg=13482.78, stdev=5435.88 00:15:39.999 lat (usec): min=5468, max=54791, avg=13584.86, stdev=5498.30 00:15:39.999 clat percentiles (usec): 00:15:39.999 | 1.00th=[ 8029], 5.00th=[ 8717], 10.00th=[ 9503], 
20.00th=[10552], 00:15:39.999 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11731], 60.00th=[12387], 00:15:39.999 | 70.00th=[13042], 80.00th=[14353], 90.00th=[20841], 95.00th=[28967], 00:15:39.999 | 99.00th=[33162], 99.50th=[33162], 99.90th=[34866], 99.95th=[43254], 00:15:39.999 | 99.99th=[54789] 00:15:39.999 write: IOPS=5159, BW=20.2MiB/s (21.1MB/s)(20.2MiB/1002msec); 0 zone resets 00:15:39.999 slat (usec): min=2, max=10524, avg=85.34, stdev=508.86 00:15:39.999 clat (usec): min=503, max=33081, avg=11216.20, stdev=2316.51 00:15:39.999 lat (usec): min=1669, max=33094, avg=11301.54, stdev=2336.98 00:15:39.999 clat percentiles (usec): 00:15:39.999 | 1.00th=[ 4686], 5.00th=[ 6980], 10.00th=[ 8717], 20.00th=[10421], 00:15:39.999 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11338], 60.00th=[11469], 00:15:39.999 | 70.00th=[11731], 80.00th=[11863], 90.00th=[12780], 95.00th=[14746], 00:15:39.999 | 99.00th=[20841], 99.50th=[21103], 99.90th=[21365], 99.95th=[33162], 00:15:39.999 | 99.99th=[33162] 00:15:39.999 bw ( KiB/s): min=20336, max=20336, per=29.15%, avg=20336.00, stdev= 0.00, samples=1 00:15:39.999 iops : min= 5084, max= 5084, avg=5084.00, stdev= 0.00, samples=1 00:15:39.999 lat (usec) : 750=0.01% 00:15:39.999 lat (msec) : 2=0.05%, 4=0.11%, 10=14.29%, 20=79.72%, 50=5.82% 00:15:39.999 lat (msec) : 100=0.01% 00:15:39.999 cpu : usr=3.90%, sys=5.89%, ctx=488, majf=0, minf=1 00:15:39.999 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:15:39.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:39.999 issued rwts: total=5120,5170,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:39.999 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:39.999 00:15:39.999 Run status group 0 (all jobs): 00:15:39.999 READ: bw=67.4MiB/s (70.7MB/s), 11.9MiB/s-20.0MiB/s (12.4MB/s-20.9MB/s), io=67.9MiB (71.2MB), run=1002-1008msec 00:15:39.999 WRITE: bw=68.1MiB/s (71.4MB/s), 11.9MiB/s-20.2MiB/s (12.5MB/s-21.1MB/s), io=68.7MiB (72.0MB), run=1002-1008msec 00:15:39.999 00:15:39.999 Disk stats (read/write): 00:15:39.999 nvme0n1: ios=4149/4461, merge=0/0, ticks=21186/17848, in_queue=39034, util=98.30% 00:15:39.999 nvme0n2: ios=3519/3584, merge=0/0, ticks=31128/28661, in_queue=59789, util=86.60% 00:15:39.999 nvme0n3: ios=2618/2775, merge=0/0, ticks=41120/61019, in_queue=102139, util=98.54% 00:15:39.999 nvme0n4: ios=4133/4553, merge=0/0, ticks=39943/30941, in_queue=70884, util=97.90% 00:15:39.999 20:41:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:15:39.999 [global] 00:15:39.999 thread=1 00:15:39.999 invalidate=1 00:15:39.999 rw=randwrite 00:15:39.999 time_based=1 00:15:39.999 runtime=1 00:15:39.999 ioengine=libaio 00:15:39.999 direct=1 00:15:39.999 bs=4096 00:15:39.999 iodepth=128 00:15:39.999 norandommap=0 00:15:39.999 numjobs=1 00:15:39.999 00:15:39.999 verify_dump=1 00:15:39.999 verify_backlog=512 00:15:39.999 verify_state_save=0 00:15:39.999 do_verify=1 00:15:39.999 verify=crc32c-intel 00:15:39.999 [job0] 00:15:39.999 filename=/dev/nvme0n1 00:15:39.999 [job1] 00:15:39.999 filename=/dev/nvme0n2 00:15:39.999 [job2] 00:15:39.999 filename=/dev/nvme0n3 00:15:39.999 [job3] 00:15:39.999 filename=/dev/nvme0n4 00:15:39.999 Could not set queue depth (nvme0n1) 00:15:39.999 Could not set queue depth (nvme0n2) 00:15:39.999 Could not set queue depth (nvme0n3) 00:15:39.999 
Could not set queue depth (nvme0n4) 00:15:40.256 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:40.256 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:40.256 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:40.256 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:40.256 fio-3.35 00:15:40.256 Starting 4 threads 00:15:41.638 00:15:41.638 job0: (groupid=0, jobs=1): err= 0: pid=2675163: Mon Jul 15 20:41:15 2024 00:15:41.638 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:15:41.638 slat (nsec): min=1071, max=18299k, avg=93819.87, stdev=711414.08 00:15:41.638 clat (usec): min=1747, max=43630, avg=12970.88, stdev=5812.73 00:15:41.638 lat (usec): min=1752, max=43633, avg=13064.70, stdev=5848.78 00:15:41.638 clat percentiles (usec): 00:15:41.638 | 1.00th=[ 2343], 5.00th=[ 4752], 10.00th=[ 5473], 20.00th=[10028], 00:15:41.638 | 30.00th=[11076], 40.00th=[11731], 50.00th=[12518], 60.00th=[13173], 00:15:41.638 | 70.00th=[14091], 80.00th=[14877], 90.00th=[19268], 95.00th=[22152], 00:15:41.638 | 99.00th=[32900], 99.50th=[36439], 99.90th=[42206], 99.95th=[42206], 00:15:41.638 | 99.99th=[43779] 00:15:41.638 write: IOPS=5134, BW=20.1MiB/s (21.0MB/s)(20.1MiB/1002msec); 0 zone resets 00:15:41.638 slat (nsec): min=1744, max=17265k, avg=82166.12, stdev=618769.99 00:15:41.638 clat (usec): min=471, max=41405, avg=11805.57, stdev=5835.90 00:15:41.638 lat (usec): min=482, max=41414, avg=11887.74, stdev=5846.80 00:15:41.638 clat percentiles (usec): 00:15:41.638 | 1.00th=[ 938], 5.00th=[ 2966], 10.00th=[ 5342], 20.00th=[ 8717], 00:15:41.638 | 30.00th=[10290], 40.00th=[11076], 50.00th=[11863], 60.00th=[12125], 00:15:41.638 | 70.00th=[12518], 80.00th=[13960], 90.00th=[15664], 95.00th=[20579], 00:15:41.638 | 99.00th=[33817], 99.50th=[36439], 99.90th=[36963], 99.95th=[36963], 00:15:41.638 | 99.99th=[41157] 00:15:41.638 bw ( KiB/s): min=18232, max=22728, per=27.67%, avg=20480.00, stdev=3179.15, samples=2 00:15:41.638 iops : min= 4558, max= 5682, avg=5120.00, stdev=794.79, samples=2 00:15:41.638 lat (usec) : 500=0.03%, 750=0.02%, 1000=0.55% 00:15:41.638 lat (msec) : 2=1.80%, 4=1.78%, 10=20.19%, 20=67.76%, 50=7.86% 00:15:41.638 cpu : usr=3.50%, sys=4.40%, ctx=440, majf=0, minf=1 00:15:41.638 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:15:41.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:41.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:41.638 issued rwts: total=5120,5145,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:41.638 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:41.638 job1: (groupid=0, jobs=1): err= 0: pid=2675175: Mon Jul 15 20:41:15 2024 00:15:41.638 read: IOPS=4443, BW=17.4MiB/s (18.2MB/s)(17.4MiB/1005msec) 00:15:41.638 slat (nsec): min=1450, max=18591k, avg=124320.32, stdev=913768.27 00:15:41.638 clat (usec): min=3497, max=80297, avg=14441.40, stdev=9365.50 00:15:41.638 lat (usec): min=4071, max=80306, avg=14565.72, stdev=9449.41 00:15:41.638 clat percentiles (usec): 00:15:41.638 | 1.00th=[ 5407], 5.00th=[ 7767], 10.00th=[ 8160], 20.00th=[ 9372], 00:15:41.638 | 30.00th=[11076], 40.00th=[11600], 50.00th=[11863], 60.00th=[12125], 00:15:41.638 | 70.00th=[13566], 80.00th=[15926], 90.00th=[20841], 95.00th=[31589], 
00:15:41.638 | 99.00th=[58983], 99.50th=[61080], 99.90th=[80217], 99.95th=[80217], 00:15:41.638 | 99.99th=[80217] 00:15:41.638 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:15:41.638 slat (usec): min=2, max=10212, avg=90.87, stdev=524.04 00:15:41.638 clat (usec): min=1651, max=80287, avg=13647.54, stdev=10351.60 00:15:41.638 lat (usec): min=1664, max=80304, avg=13738.41, stdev=10407.88 00:15:41.638 clat percentiles (usec): 00:15:41.638 | 1.00th=[ 3392], 5.00th=[ 5669], 10.00th=[ 7111], 20.00th=[ 8586], 00:15:41.638 | 30.00th=[10028], 40.00th=[10945], 50.00th=[11731], 60.00th=[11863], 00:15:41.638 | 70.00th=[12125], 80.00th=[13698], 90.00th=[21365], 95.00th=[32900], 00:15:41.638 | 99.00th=[67634], 99.50th=[73925], 99.90th=[77071], 99.95th=[77071], 00:15:41.638 | 99.99th=[80217] 00:15:41.638 bw ( KiB/s): min=16384, max=20480, per=24.90%, avg=18432.00, stdev=2896.31, samples=2 00:15:41.638 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:15:41.638 lat (msec) : 2=0.04%, 4=0.98%, 10=24.36%, 20=63.85%, 50=8.68% 00:15:41.638 lat (msec) : 100=2.08% 00:15:41.638 cpu : usr=3.09%, sys=5.38%, ctx=560, majf=0, minf=1 00:15:41.638 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:15:41.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:41.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:41.638 issued rwts: total=4466,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:41.638 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:41.638 job2: (groupid=0, jobs=1): err= 0: pid=2675194: Mon Jul 15 20:41:15 2024 00:15:41.638 read: IOPS=3993, BW=15.6MiB/s (16.4MB/s)(15.6MiB/1003msec) 00:15:41.638 slat (nsec): min=1283, max=4401.5k, avg=100636.86, stdev=530405.01 00:15:41.638 clat (usec): min=622, max=17874, avg=12945.65, stdev=1686.94 00:15:41.638 lat (usec): min=4697, max=18344, avg=13046.28, stdev=1690.04 00:15:41.638 clat percentiles (usec): 00:15:41.638 | 1.00th=[ 5145], 5.00th=[10159], 10.00th=[10945], 20.00th=[11863], 00:15:41.638 | 30.00th=[12387], 40.00th=[12649], 50.00th=[13042], 60.00th=[13566], 00:15:41.638 | 70.00th=[13698], 80.00th=[14091], 90.00th=[14877], 95.00th=[15270], 00:15:41.638 | 99.00th=[16450], 99.50th=[16712], 99.90th=[17695], 99.95th=[17695], 00:15:41.638 | 99.99th=[17957] 00:15:41.638 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:15:41.638 slat (usec): min=2, max=16883, avg=140.93, stdev=822.45 00:15:41.638 clat (msec): min=6, max=154, avg=18.05, stdev=22.43 00:15:41.638 lat (msec): min=6, max=154, avg=18.19, stdev=22.57 00:15:41.638 clat percentiles (msec): 00:15:41.638 | 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 12], 00:15:41.638 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 14], 60.00th=[ 14], 00:15:41.638 | 70.00th=[ 14], 80.00th=[ 14], 90.00th=[ 15], 95.00th=[ 67], 00:15:41.638 | 99.00th=[ 140], 99.50th=[ 150], 99.90th=[ 155], 99.95th=[ 155], 00:15:41.638 | 99.99th=[ 155] 00:15:41.638 bw ( KiB/s): min=13320, max=19448, per=22.14%, avg=16384.00, stdev=4333.15, samples=2 00:15:41.638 iops : min= 3330, max= 4862, avg=4096.00, stdev=1083.29, samples=2 00:15:41.638 lat (usec) : 750=0.01% 00:15:41.638 lat (msec) : 10=3.06%, 20=93.94%, 50=0.30%, 100=1.12%, 250=1.57% 00:15:41.638 cpu : usr=3.49%, sys=3.99%, ctx=470, majf=0, minf=1 00:15:41.638 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:41.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:15:41.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:41.638 issued rwts: total=4005,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:41.638 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:41.638 job3: (groupid=0, jobs=1): err= 0: pid=2675200: Mon Jul 15 20:41:15 2024 00:15:41.638 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:15:41.638 slat (nsec): min=1517, max=9881.7k, avg=109323.69, stdev=600882.27 00:15:41.638 clat (usec): min=1016, max=26868, avg=14107.00, stdev=2432.83 00:15:41.638 lat (usec): min=1028, max=26888, avg=14216.33, stdev=2425.50 00:15:41.638 clat percentiles (usec): 00:15:41.638 | 1.00th=[ 9896], 5.00th=[11207], 10.00th=[11600], 20.00th=[12911], 00:15:41.638 | 30.00th=[13435], 40.00th=[13698], 50.00th=[14091], 60.00th=[14222], 00:15:41.638 | 70.00th=[14746], 80.00th=[15139], 90.00th=[16188], 95.00th=[18482], 00:15:41.638 | 99.00th=[22414], 99.50th=[25560], 99.90th=[25560], 99.95th=[25560], 00:15:41.638 | 99.99th=[26870] 00:15:41.639 write: IOPS=4736, BW=18.5MiB/s (19.4MB/s)(18.5MiB/1002msec); 0 zone resets 00:15:41.639 slat (usec): min=2, max=13521, avg=95.51, stdev=541.54 00:15:41.639 clat (usec): min=537, max=26816, avg=13076.22, stdev=2493.02 00:15:41.639 lat (usec): min=2137, max=26826, avg=13171.73, stdev=2481.72 00:15:41.639 clat percentiles (usec): 00:15:41.639 | 1.00th=[ 4228], 5.00th=[ 9110], 10.00th=[10421], 20.00th=[11731], 00:15:41.639 | 30.00th=[12256], 40.00th=[12649], 50.00th=[13304], 60.00th=[13698], 00:15:41.639 | 70.00th=[14091], 80.00th=[14353], 90.00th=[15008], 95.00th=[16712], 00:15:41.639 | 99.00th=[22414], 99.50th=[22414], 99.90th=[26870], 99.95th=[26870], 00:15:41.639 | 99.99th=[26870] 00:15:41.639 bw ( KiB/s): min=16560, max=20384, per=24.96%, avg=18472.00, stdev=2703.98, samples=2 00:15:41.639 iops : min= 4140, max= 5096, avg=4618.00, stdev=675.99, samples=2 00:15:41.639 lat (usec) : 750=0.01% 00:15:41.639 lat (msec) : 2=0.38%, 4=0.25%, 10=4.20%, 20=93.60%, 50=1.56% 00:15:41.639 cpu : usr=4.00%, sys=4.70%, ctx=476, majf=0, minf=1 00:15:41.639 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:15:41.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:41.639 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:41.639 issued rwts: total=4608,4746,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:41.639 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:41.639 00:15:41.639 Run status group 0 (all jobs): 00:15:41.639 READ: bw=70.7MiB/s (74.2MB/s), 15.6MiB/s-20.0MiB/s (16.4MB/s-20.9MB/s), io=71.1MiB (74.5MB), run=1002-1005msec 00:15:41.639 WRITE: bw=72.3MiB/s (75.8MB/s), 16.0MiB/s-20.1MiB/s (16.7MB/s-21.0MB/s), io=72.6MiB (76.2MB), run=1002-1005msec 00:15:41.639 00:15:41.639 Disk stats (read/write): 00:15:41.639 nvme0n1: ios=4195/4608, merge=0/0, ticks=40319/34641, in_queue=74960, util=93.89% 00:15:41.639 nvme0n2: ios=3636/3855, merge=0/0, ticks=51848/53367, in_queue=105215, util=98.38% 00:15:41.639 nvme0n3: ios=3155/3584, merge=0/0, ticks=14034/21554, in_queue=35588, util=96.25% 00:15:41.639 nvme0n4: ios=3947/4096, merge=0/0, ticks=23032/22559, in_queue=45591, util=99.48% 00:15:41.639 20:41:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:15:41.639 20:41:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2675380 00:15:41.639 20:41:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t 
read -r 10 00:15:41.639 20:41:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:15:41.639 [global] 00:15:41.639 thread=1 00:15:41.639 invalidate=1 00:15:41.639 rw=read 00:15:41.639 time_based=1 00:15:41.639 runtime=10 00:15:41.639 ioengine=libaio 00:15:41.639 direct=1 00:15:41.639 bs=4096 00:15:41.639 iodepth=1 00:15:41.639 norandommap=1 00:15:41.639 numjobs=1 00:15:41.639 00:15:41.639 [job0] 00:15:41.639 filename=/dev/nvme0n1 00:15:41.639 [job1] 00:15:41.639 filename=/dev/nvme0n2 00:15:41.639 [job2] 00:15:41.639 filename=/dev/nvme0n3 00:15:41.639 [job3] 00:15:41.639 filename=/dev/nvme0n4 00:15:41.639 Could not set queue depth (nvme0n1) 00:15:41.639 Could not set queue depth (nvme0n2) 00:15:41.639 Could not set queue depth (nvme0n3) 00:15:41.639 Could not set queue depth (nvme0n4) 00:15:41.896 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:41.896 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:41.896 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:41.896 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:41.896 fio-3.35 00:15:41.896 Starting 4 threads 00:15:44.431 20:41:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:15:44.688 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=28606464, buflen=4096 00:15:44.688 fio: pid=2675671, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:44.688 20:41:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:15:44.945 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=35258368, buflen=4096 00:15:44.945 fio: pid=2675666, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:44.945 20:41:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:44.945 20:41:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:15:44.945 20:41:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:44.945 20:41:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:15:44.945 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=323584, buflen=4096 00:15:44.945 fio: pid=2675639, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:45.202 20:41:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:45.202 20:41:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:15:45.202 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=23011328, buflen=4096 00:15:45.202 fio: pid=2675649, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:45.202 00:15:45.202 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2675639: Mon Jul 15 20:41:19 2024 00:15:45.202 read: 
IOPS=25, BW=102KiB/s (104kB/s)(316KiB/3107msec) 00:15:45.202 slat (nsec): min=6230, max=78343, avg=22605.56, stdev=8540.44 00:15:45.202 clat (usec): min=363, max=42052, avg=39041.58, stdev=8960.85 00:15:45.202 lat (usec): min=385, max=42074, avg=39064.18, stdev=8958.51 00:15:45.202 clat percentiles (usec): 00:15:45.202 | 1.00th=[ 363], 5.00th=[ 586], 10.00th=[40633], 20.00th=[41157], 00:15:45.202 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:45.202 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:15:45.202 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:45.202 | 99.99th=[42206] 00:15:45.202 bw ( KiB/s): min= 96, max= 112, per=0.39%, avg=101.33, stdev= 6.53, samples=6 00:15:45.202 iops : min= 24, max= 28, avg=25.33, stdev= 1.63, samples=6 00:15:45.202 lat (usec) : 500=1.25%, 750=3.75% 00:15:45.202 lat (msec) : 50=93.75% 00:15:45.202 cpu : usr=0.03%, sys=0.03%, ctx=82, majf=0, minf=1 00:15:45.202 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:45.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:45.202 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:45.202 issued rwts: total=80,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:45.202 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:45.202 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2675649: Mon Jul 15 20:41:19 2024 00:15:45.202 read: IOPS=1693, BW=6773KiB/s (6935kB/s)(21.9MiB/3318msec) 00:15:45.202 slat (usec): min=6, max=15762, avg=12.36, stdev=239.92 00:15:45.202 clat (usec): min=208, max=42031, avg=572.38, stdev=3365.89 00:15:45.202 lat (usec): min=218, max=57010, avg=584.74, stdev=3427.66 00:15:45.202 clat percentiles (usec): 00:15:45.202 | 1.00th=[ 229], 5.00th=[ 243], 10.00th=[ 249], 20.00th=[ 262], 00:15:45.202 | 30.00th=[ 273], 40.00th=[ 289], 50.00th=[ 293], 60.00th=[ 302], 00:15:45.202 | 70.00th=[ 306], 80.00th=[ 314], 90.00th=[ 334], 95.00th=[ 359], 00:15:45.202 | 99.00th=[ 412], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:15:45.202 | 99.99th=[42206] 00:15:45.202 bw ( KiB/s): min= 96, max=13072, per=28.13%, avg=7219.83, stdev=5875.56, samples=6 00:15:45.202 iops : min= 24, max= 3268, avg=1804.83, stdev=1468.88, samples=6 00:15:45.202 lat (usec) : 250=10.14%, 500=89.07%, 750=0.04% 00:15:45.202 lat (msec) : 2=0.04%, 20=0.02%, 50=0.68% 00:15:45.202 cpu : usr=0.93%, sys=2.77%, ctx=5622, majf=0, minf=1 00:15:45.202 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:45.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:45.202 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:45.202 issued rwts: total=5619,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:45.202 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:45.202 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2675666: Mon Jul 15 20:41:19 2024 00:15:45.202 read: IOPS=2962, BW=11.6MiB/s (12.1MB/s)(33.6MiB/2906msec) 00:15:45.202 slat (nsec): min=7151, max=39348, avg=8177.64, stdev=1312.01 00:15:45.202 clat (usec): min=263, max=1667, avg=324.99, stdev=45.65 00:15:45.202 lat (usec): min=270, max=1679, avg=333.17, stdev=45.71 00:15:45.202 clat percentiles (usec): 00:15:45.202 | 1.00th=[ 281], 5.00th=[ 293], 10.00th=[ 297], 20.00th=[ 306], 00:15:45.202 | 30.00th=[ 310], 40.00th=[ 314], 50.00th=[ 
318], 60.00th=[ 322], 00:15:45.202 | 70.00th=[ 326], 80.00th=[ 334], 90.00th=[ 351], 95.00th=[ 433], 00:15:45.202 | 99.00th=[ 482], 99.50th=[ 494], 99.90th=[ 519], 99.95th=[ 1139], 00:15:45.202 | 99.99th=[ 1663] 00:15:45.202 bw ( KiB/s): min=10752, max=12456, per=46.37%, avg=11902.40, stdev=686.59, samples=5 00:15:45.202 iops : min= 2688, max= 3114, avg=2975.60, stdev=171.65, samples=5 00:15:45.202 lat (usec) : 500=99.59%, 750=0.34% 00:15:45.202 lat (msec) : 2=0.06% 00:15:45.202 cpu : usr=1.38%, sys=5.16%, ctx=8609, majf=0, minf=1 00:15:45.202 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:45.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:45.202 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:45.202 issued rwts: total=8609,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:45.202 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:45.202 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2675671: Mon Jul 15 20:41:19 2024 00:15:45.202 read: IOPS=2578, BW=10.1MiB/s (10.6MB/s)(27.3MiB/2709msec) 00:15:45.202 slat (nsec): min=7172, max=42303, avg=8165.21, stdev=1282.22 00:15:45.202 clat (usec): min=330, max=535, avg=374.58, stdev=16.77 00:15:45.202 lat (usec): min=338, max=543, avg=382.74, stdev=16.86 00:15:45.202 clat percentiles (usec): 00:15:45.202 | 1.00th=[ 347], 5.00th=[ 355], 10.00th=[ 359], 20.00th=[ 363], 00:15:45.202 | 30.00th=[ 367], 40.00th=[ 371], 50.00th=[ 375], 60.00th=[ 375], 00:15:45.202 | 70.00th=[ 379], 80.00th=[ 383], 90.00th=[ 392], 95.00th=[ 396], 00:15:45.202 | 99.00th=[ 420], 99.50th=[ 490], 99.90th=[ 519], 99.95th=[ 529], 00:15:45.202 | 99.99th=[ 537] 00:15:45.202 bw ( KiB/s): min=10296, max=10504, per=40.57%, avg=10412.80, stdev=78.71, samples=5 00:15:45.202 iops : min= 2574, max= 2626, avg=2603.20, stdev=19.68, samples=5 00:15:45.202 lat (usec) : 500=99.70%, 750=0.29% 00:15:45.202 cpu : usr=1.62%, sys=4.10%, ctx=6985, majf=0, minf=2 00:15:45.202 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:45.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:45.202 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:45.202 issued rwts: total=6985,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:45.202 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:45.202 00:15:45.202 Run status group 0 (all jobs): 00:15:45.202 READ: bw=25.1MiB/s (26.3MB/s), 102KiB/s-11.6MiB/s (104kB/s-12.1MB/s), io=83.2MiB (87.2MB), run=2709-3318msec 00:15:45.202 00:15:45.202 Disk stats (read/write): 00:15:45.202 nvme0n1: ios=80/0, merge=0/0, ticks=3095/0, in_queue=3095, util=95.41% 00:15:45.202 nvme0n2: ios=5613/0, merge=0/0, ticks=2945/0, in_queue=2945, util=95.82% 00:15:45.202 nvme0n3: ios=8510/0, merge=0/0, ticks=2664/0, in_queue=2664, util=96.52% 00:15:45.202 nvme0n4: ios=6771/0, merge=0/0, ticks=2470/0, in_queue=2470, util=96.45% 00:15:45.459 20:41:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:45.459 20:41:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:15:45.716 20:41:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:45.716 20:41:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:15:45.716 20:41:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:45.716 20:41:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:15:45.973 20:41:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:45.973 20:41:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:15:46.229 20:41:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:15:46.229 20:41:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 2675380 00:15:46.229 20:41:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:15:46.229 20:41:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:46.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:46.229 20:41:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:46.229 20:41:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:15:46.229 20:41:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:46.229 20:41:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:46.229 20:41:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:46.229 20:41:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:46.229 20:41:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:15:46.229 20:41:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:15:46.229 20:41:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:15:46.229 nvmf hotplug test: fio failed as expected 00:15:46.229 20:41:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:46.486 20:41:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:15:46.486 20:41:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:15:46.486 20:41:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:15:46.486 20:41:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:15:46.486 20:41:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:15:46.486 20:41:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:46.486 20:41:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:15:46.486 20:41:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:46.486 20:41:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:15:46.486 20:41:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:46.486 20:41:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:46.486 rmmod nvme_tcp 00:15:46.486 rmmod nvme_fabrics 00:15:46.486 rmmod nvme_keyring 00:15:46.486 20:41:20 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:46.486 20:41:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:15:46.486 20:41:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:15:46.486 20:41:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2672671 ']' 00:15:46.486 20:41:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2672671 00:15:46.486 20:41:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 2672671 ']' 00:15:46.486 20:41:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 2672671 00:15:46.486 20:41:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:15:46.486 20:41:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:46.486 20:41:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2672671 00:15:46.744 20:41:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:46.744 20:41:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:46.744 20:41:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2672671' 00:15:46.744 killing process with pid 2672671 00:15:46.744 20:41:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 2672671 00:15:46.744 20:41:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 2672671 00:15:46.744 20:41:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:46.744 20:41:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:46.744 20:41:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:46.744 20:41:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:46.744 20:41:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:46.744 20:41:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.744 20:41:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:46.744 20:41:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.269 20:41:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:49.269 00:15:49.269 real 0m25.819s 00:15:49.269 user 1m46.298s 00:15:49.269 sys 0m7.599s 00:15:49.269 20:41:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:49.269 20:41:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.269 ************************************ 00:15:49.269 END TEST nvmf_fio_target 00:15:49.269 ************************************ 00:15:49.269 20:41:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:49.269 20:41:23 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:49.270 20:41:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:49.270 20:41:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:49.270 20:41:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:49.270 ************************************ 00:15:49.270 START TEST nvmf_bdevio 00:15:49.270 ************************************ 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:49.270 * Looking for test storage... 00:15:49.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:15:49.270 20:41:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:54.547 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:54.547 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:54.547 Found net devices under 0000:86:00.0: cvl_0_0 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:54.547 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:54.548 
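(Annotation: the glob at nvmf/common.sh@383 above is what ties each discovered PCI function to its kernel net device. A standalone sketch of that sysfs walk, using the two ICE port addresses reported in this run; everything else is an assumption for other hosts:

    # Sketch: map NIC ports to their netdevs the way nvmf/common.sh does,
    # by globbing /sys/bus/pci/devices/<pci>/net/*.
    for pci in 0000:86:00.0 0000:86:00.1; do
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$path" ] || continue           # no bound netdev, e.g. driver unloaded
            dev=${path##*/}                      # same strip as the ##*/ expansion at @399
            echo "$pci -> $dev ($(cat "/sys/class/net/$dev/operstate"))"
        done
    done

The operstate check corresponds to the [[ up == up ]] test at @390 above.)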
Found net devices under 0000:86:00.1: cvl_0_1 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:54.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:54.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:15:54.548 00:15:54.548 --- 10.0.0.2 ping statistics --- 00:15:54.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.548 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:54.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:54.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:15:54.548 00:15:54.548 --- 10.0.0.1 ping statistics --- 00:15:54.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.548 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2679780 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2679780 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 2679780 ']' 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:54.548 20:41:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:54.548 [2024-07-15 20:41:28.772879] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:15:54.548 [2024-07-15 20:41:28.772926] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:54.548 EAL: No free 2048 kB hugepages reported on node 1 00:15:54.548 [2024-07-15 20:41:28.831467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:54.548 [2024-07-15 20:41:28.911100] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:54.548 [2024-07-15 20:41:28.911138] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:54.548 [2024-07-15 20:41:28.911146] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:54.548 [2024-07-15 20:41:28.911151] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:54.548 [2024-07-15 20:41:28.911157] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:54.548 [2024-07-15 20:41:28.911275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:54.548 [2024-07-15 20:41:28.911385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:54.548 [2024-07-15 20:41:28.911492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:54.548 [2024-07-15 20:41:28.911493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:55.121 20:41:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:55.121 20:41:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:15:55.121 20:41:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:55.121 20:41:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:55.121 20:41:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:55.379 20:41:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:55.379 20:41:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:55.379 20:41:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.379 20:41:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:55.379 [2024-07-15 20:41:29.621271] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:55.379 20:41:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.379 20:41:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:55.379 20:41:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.379 20:41:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:55.379 Malloc0 00:15:55.379 20:41:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.379 20:41:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:55.379 20:41:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.379 20:41:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:55.379 20:41:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.379 20:41:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:55.379 20:41:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.379 20:41:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:55.379 20:41:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.379 20:41:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:55.379 20:41:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.379 20:41:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
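(Annotation: the rpc_cmd sequence above — transport, Malloc bdev, subsystem, namespace, listener — carries the same method names and arguments you would pass to scripts/rpc.py against the target's default /var/tmp/spdk.sock socket. A sketch, with paths assumed relative to the SPDK tree:

    # One 64 MiB / 512 B-block Malloc bdev, exported as cnode1 on 10.0.0.2:4420
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The RPC socket is a filesystem UNIX socket, so these work from the root namespace even though nvmf_tgt itself runs inside cvl_0_0_ns_spdk.)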
00:15:55.379 [2024-07-15 20:41:29.672951] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:55.379 20:41:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.379 20:41:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:15:55.379 20:41:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:55.379 20:41:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:15:55.379 20:41:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:15:55.379 20:41:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:55.379 20:41:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:55.379 { 00:15:55.379 "params": { 00:15:55.379 "name": "Nvme$subsystem", 00:15:55.379 "trtype": "$TEST_TRANSPORT", 00:15:55.379 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:55.379 "adrfam": "ipv4", 00:15:55.379 "trsvcid": "$NVMF_PORT", 00:15:55.379 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:55.379 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:55.379 "hdgst": ${hdgst:-false}, 00:15:55.379 "ddgst": ${ddgst:-false} 00:15:55.379 }, 00:15:55.379 "method": "bdev_nvme_attach_controller" 00:15:55.379 } 00:15:55.379 EOF 00:15:55.379 )") 00:15:55.379 20:41:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:15:55.379 20:41:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:15:55.379 20:41:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:15:55.379 20:41:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:55.379 "params": { 00:15:55.379 "name": "Nvme1", 00:15:55.379 "trtype": "tcp", 00:15:55.379 "traddr": "10.0.0.2", 00:15:55.379 "adrfam": "ipv4", 00:15:55.379 "trsvcid": "4420", 00:15:55.379 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:55.379 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:55.379 "hdgst": false, 00:15:55.379 "ddgst": false 00:15:55.379 }, 00:15:55.379 "method": "bdev_nvme_attach_controller" 00:15:55.379 }' 00:15:55.379 [2024-07-15 20:41:29.720940] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:15:55.379 [2024-07-15 20:41:29.720983] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2680009 ] 00:15:55.379 EAL: No free 2048 kB hugepages reported on node 1 00:15:55.379 [2024-07-15 20:41:29.775660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:55.379 [2024-07-15 20:41:29.851329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.379 [2024-07-15 20:41:29.851344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:55.379 [2024-07-15 20:41:29.851346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.941 I/O targets: 00:15:55.941 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:55.941 00:15:55.941 00:15:55.941 CUnit - A unit testing framework for C - Version 2.1-3 00:15:55.941 http://cunit.sourceforge.net/ 00:15:55.941 00:15:55.941 00:15:55.941 Suite: bdevio tests on: Nvme1n1 00:15:55.941 Test: blockdev write read block ...passed 00:15:55.941 Test: blockdev write zeroes read block ...passed 00:15:55.941 Test: blockdev write zeroes read no split ...passed 00:15:55.941 Test: blockdev write zeroes read split ...passed 00:15:55.941 Test: blockdev write zeroes read split partial ...passed 00:15:55.941 Test: blockdev reset ...[2024-07-15 20:41:30.374912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:55.941 [2024-07-15 20:41:30.374975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11206d0 (9): Bad file descriptor 00:15:55.941 [2024-07-15 20:41:30.391296] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
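(Annotation: the bdevio binary above reads its bdev layout from the JSON emitted by gen_nvmf_target_json, fed through /dev/fd/62. Written to a file, an equivalent standalone invocation would look like the sketch below; the "subsystems" wrapper is SPDK's standard --json config shape, which the jq heredoc above adds around the attach stanza printed in the log (the real helper may also emit housekeeping methods not reproduced here):

    cat > /tmp/bdevio.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    ./test/bdev/bdevio/bdevio --json /tmp/bdevio.json
)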
00:15:55.941 passed 00:15:55.941 Test: blockdev write read 8 blocks ...passed 00:15:55.941 Test: blockdev write read size > 128k ...passed 00:15:55.941 Test: blockdev write read invalid size ...passed 00:15:56.198 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:56.198 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:56.198 Test: blockdev write read max offset ...passed 00:15:56.198 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:56.198 Test: blockdev writev readv 8 blocks ...passed 00:15:56.198 Test: blockdev writev readv 30 x 1block ...passed 00:15:56.198 Test: blockdev writev readv block ...passed 00:15:56.198 Test: blockdev writev readv size > 128k ...passed 00:15:56.198 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:56.198 Test: blockdev comparev and writev ...[2024-07-15 20:41:30.562583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:56.198 [2024-07-15 20:41:30.562612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:56.198 [2024-07-15 20:41:30.562626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:56.198 [2024-07-15 20:41:30.562635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:56.198 [2024-07-15 20:41:30.562900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:56.198 [2024-07-15 20:41:30.562911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:56.198 [2024-07-15 20:41:30.562923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:56.198 [2024-07-15 20:41:30.562930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:56.198 [2024-07-15 20:41:30.563208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:56.198 [2024-07-15 20:41:30.563219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:56.198 [2024-07-15 20:41:30.563234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:56.198 [2024-07-15 20:41:30.563247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:56.198 [2024-07-15 20:41:30.563517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:56.198 [2024-07-15 20:41:30.563530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:56.198 [2024-07-15 20:41:30.563542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:56.198 [2024-07-15 20:41:30.563549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:56.198 passed 00:15:56.198 Test: blockdev nvme passthru rw ...passed 00:15:56.198 Test: blockdev nvme passthru vendor specific ...[2024-07-15 20:41:30.645678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:56.198 [2024-07-15 20:41:30.645693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:56.198 [2024-07-15 20:41:30.645842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:56.198 [2024-07-15 20:41:30.645853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:56.198 [2024-07-15 20:41:30.645996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:56.198 [2024-07-15 20:41:30.646006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:56.198 [2024-07-15 20:41:30.646149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:56.198 [2024-07-15 20:41:30.646159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:56.198 passed 00:15:56.198 Test: blockdev nvme admin passthru ...passed 00:15:56.455 Test: blockdev copy ...passed 00:15:56.455 00:15:56.455 Run Summary: Type Total Ran Passed Failed Inactive 00:15:56.455 suites 1 1 n/a 0 0 00:15:56.455 tests 23 23 23 0 0 00:15:56.455 asserts 152 152 152 0 n/a 00:15:56.455 00:15:56.455 Elapsed time = 1.069 seconds 00:15:56.455 20:41:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:56.455 20:41:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.455 20:41:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:56.455 20:41:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.455 20:41:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:56.455 20:41:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:15:56.455 20:41:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:56.455 20:41:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:15:56.455 20:41:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:56.455 20:41:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:15:56.455 20:41:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:56.455 20:41:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:56.455 rmmod nvme_tcp 00:15:56.455 rmmod nvme_fabrics 00:15:56.455 rmmod nvme_keyring 00:15:56.712 20:41:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:56.712 20:41:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:15:56.712 20:41:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:15:56.712 20:41:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2679780 ']' 00:15:56.712 20:41:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2679780 00:15:56.712 20:41:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
2679780 ']' 00:15:56.712 20:41:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 2679780 00:15:56.712 20:41:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:15:56.712 20:41:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:56.712 20:41:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2679780 00:15:56.712 20:41:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:15:56.712 20:41:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:15:56.712 20:41:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2679780' 00:15:56.712 killing process with pid 2679780 00:15:56.712 20:41:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 2679780 00:15:56.712 20:41:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 2679780 00:15:56.970 20:41:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:56.970 20:41:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:56.970 20:41:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:56.970 20:41:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:56.970 20:41:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:56.970 20:41:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.970 20:41:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:56.970 20:41:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.871 20:41:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:58.871 00:15:58.871 real 0m9.966s 00:15:58.871 user 0m13.028s 00:15:58.871 sys 0m4.472s 00:15:58.871 20:41:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:58.871 20:41:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:58.871 ************************************ 00:15:58.871 END TEST nvmf_bdevio 00:15:58.871 ************************************ 00:15:58.871 20:41:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:58.871 20:41:33 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:58.871 20:41:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:58.871 20:41:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:58.871 20:41:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:58.871 ************************************ 00:15:58.871 START TEST nvmf_auth_target 00:15:58.871 ************************************ 00:15:58.871 20:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:59.129 * Looking for test storage... 
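(Annotation: before the auth suite's own setup scrolls past, note that the bdevio cleanup just above followed the usual nvmftestfini shape: kill the target, unload the host modules, then unwind the namespace. A rough, hand-runnable sketch using the pid, interface, and namespace names from this run:

    kill "$nvmfpid" 2>/dev/null            # nvmfpid=2679780 in this run
    modprobe -v -r nvme-tcp nvme-fabrics   # mirrors the rmmod output above
    iptables -D INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 2>/dev/null
    ip netns delete cvl_0_0_ns_spdk        # returns cvl_0_0 to the root namespace
    ip -4 addr flush cvl_0_1
)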
00:15:59.129 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:59.129 20:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:59.129 20:41:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:59.129 20:41:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:59.129 20:41:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:59.129 20:41:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:59.129 20:41:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:59.129 20:41:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:59.129 20:41:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:59.129 20:41:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:15:59.130 20:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.387 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:04.387 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:04.387 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:04.388 20:41:38 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:04.388 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:04.388 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:16:04.388 Found net devices under 0000:86:00.0: cvl_0_0 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:04.388 Found net devices under 0000:86:00.1: cvl_0_1 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:04.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:04.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:16:04.388 00:16:04.388 --- 10.0.0.2 ping statistics --- 00:16:04.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.388 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:04.388 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:04.388 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:16:04.388 00:16:04.388 --- 10.0.0.1 ping statistics --- 00:16:04.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.388 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2683526 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2683526 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2683526 ']' 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
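(Annotation: the namespace plumbing repeated above — here and in the bdevio run — is what lets one host act as both initiator and target over real ports: cvl_0_0 is moved into its own netns for the target side, so 10.0.0.1 to 10.0.0.2 traffic actually crosses the wire between the two ICE ports. The command sequence, lifted from nvmf_tcp_init above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                               # sanity check, as above
)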
00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:04.388 20:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2683773 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=34dc4e57ae120044bb6ab655b5d12257eff06b1b9c6cdd32 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.1rd 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 34dc4e57ae120044bb6ab655b5d12257eff06b1b9c6cdd32 0 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 34dc4e57ae120044bb6ab655b5d12257eff06b1b9c6cdd32 0 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=34dc4e57ae120044bb6ab655b5d12257eff06b1b9c6cdd32 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.1rd 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.1rd 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.1rd 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a36784c4d32a98ffcca69933c4ecacee31bd6aa1cedd5aa12d67ef83973833dc 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.HtR 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a36784c4d32a98ffcca69933c4ecacee31bd6aa1cedd5aa12d67ef83973833dc 3 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a36784c4d32a98ffcca69933c4ecacee31bd6aa1cedd5aa12d67ef83973833dc 3 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a36784c4d32a98ffcca69933c4ecacee31bd6aa1cedd5aa12d67ef83973833dc 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.HtR 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.HtR 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.HtR 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=fc1410ad17e1db640f06ea65a6b22fa3 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.OgZ 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key fc1410ad17e1db640f06ea65a6b22fa3 1 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 fc1410ad17e1db640f06ea65a6b22fa3 1 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=fc1410ad17e1db640f06ea65a6b22fa3 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.OgZ 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.OgZ 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.OgZ 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:04.952 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:04.953 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:04.953 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:04.953 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b41bd3aefc0a546102bf52900737837b2f53a9a8b1fc69d4 00:16:04.953 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:04.953 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.7FD 00:16:04.953 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b41bd3aefc0a546102bf52900737837b2f53a9a8b1fc69d4 2 00:16:04.953 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b41bd3aefc0a546102bf52900737837b2f53a9a8b1fc69d4 2 00:16:04.953 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:04.953 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:04.953 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b41bd3aefc0a546102bf52900737837b2f53a9a8b1fc69d4 00:16:04.953 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:04.953 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:04.953 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.7FD 00:16:04.953 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.7FD 00:16:04.953 20:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.7FD 00:16:04.953 20:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:16:04.953 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:04.953 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:04.953 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:04.953 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:04.953 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:04.953 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:04.953 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ada1d7df235f0da6d09b19d18f7a29de63ed627046db864e 00:16:04.953 
20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:04.953 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.mue 00:16:04.953 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ada1d7df235f0da6d09b19d18f7a29de63ed627046db864e 2 00:16:04.953 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ada1d7df235f0da6d09b19d18f7a29de63ed627046db864e 2 00:16:04.953 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:04.953 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:04.953 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ada1d7df235f0da6d09b19d18f7a29de63ed627046db864e 00:16:04.953 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:04.953 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:05.209 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.mue 00:16:05.209 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.mue 00:16:05.209 20:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.mue 00:16:05.209 20:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:16:05.209 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:05.209 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:05.209 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:05.209 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:05.209 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:05.209 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:05.209 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4cc1b86a39adc12b47852e1bb7ff0d31 00:16:05.209 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:05.209 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.rf9 00:16:05.209 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4cc1b86a39adc12b47852e1bb7ff0d31 1 00:16:05.209 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4cc1b86a39adc12b47852e1bb7ff0d31 1 00:16:05.209 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:05.209 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:05.209 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4cc1b86a39adc12b47852e1bb7ff0d31 00:16:05.209 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:05.209 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:05.209 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.rf9 00:16:05.209 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.rf9 00:16:05.209 20:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.rf9 00:16:05.209 20:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:16:05.210 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:16:05.210 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:05.210 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:05.210 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:05.210 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:05.210 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:05.210 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=cf63ca1230c8a674fa34a2a6c7fde8cddb6d966b1dbb3b6f05e5cb5dfe341e16 00:16:05.210 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:05.210 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Bcn 00:16:05.210 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key cf63ca1230c8a674fa34a2a6c7fde8cddb6d966b1dbb3b6f05e5cb5dfe341e16 3 00:16:05.210 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 cf63ca1230c8a674fa34a2a6c7fde8cddb6d966b1dbb3b6f05e5cb5dfe341e16 3 00:16:05.210 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:05.210 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:05.210 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=cf63ca1230c8a674fa34a2a6c7fde8cddb6d966b1dbb3b6f05e5cb5dfe341e16 00:16:05.210 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:05.210 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:05.210 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Bcn 00:16:05.210 20:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Bcn 00:16:05.210 20:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.Bcn 00:16:05.210 20:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:16:05.210 20:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2683526 00:16:05.210 20:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2683526 ']' 00:16:05.210 20:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.210 20:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:05.210 20:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
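[annotation] The gen_dhchap_key traces above draw the raw secret from /dev/urandom with xxd (len/2 bytes, printed as a hex string), park it in a mktemp file with mode 0600, and wrap it through format_dhchap_key's inline python step. A minimal standalone sketch of that wrapping follows; the base64(hex-text + little-endian CRC-32) payload layout is an assumption inferred from the DHHC-1 secrets visible later in this log, not taken from nvmf/common.sh itself:

    # Sketch of one key generation pass (assumption: DHHC-1 payload =
    # base64(hex-string bytes + CRC-32 LE), inferred from the logged secrets).
    nbytes=32 digest_id=3                        # digests maps sha512 -> 3 above
    hex=$(xxd -p -c0 -l "$nbytes" /dev/urandom)  # 2*nbytes hex characters
    file=$(mktemp -t spdk.key-sha512.XXX)        # same template as the trace
    python3 -c 'import base64,sys,zlib; k=sys.argv[2].encode(); print("DHHC-1:%02d:%s:" % (int(sys.argv[1]), base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' "$digest_id" "$hex" > "$file"
    chmod 0600 "$file"

Decoding one of the logged secrets (e.g. the DHHC-1:03: string for key a36784...33dc) yields the ASCII hex text plus four trailing bytes, which is what motivates the CRC-32 trailer assumption.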
00:16:05.210 20:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:05.210 20:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.466 20:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:05.466 20:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:05.466 20:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2683773 /var/tmp/host.sock 00:16:05.466 20:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2683773 ']' 00:16:05.466 20:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:16:05.466 20:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:05.466 20:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:05.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:05.466 20:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:05.466 20:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.466 20:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:05.466 20:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:05.466 20:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:16:05.466 20:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.466 20:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.466 20:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.466 20:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:05.466 20:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.1rd 00:16:05.466 20:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.466 20:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.466 20:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.466 20:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.1rd 00:16:05.466 20:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.1rd 00:16:05.721 20:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.HtR ]] 00:16:05.721 20:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.HtR 00:16:05.721 20:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.721 20:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.721 20:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.721 20:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.HtR 00:16:05.721 20:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
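[annotation] waitforlisten above blocks on both daemons before any keys are pushed: the nvmf target answering on /var/tmp/spdk.sock (pid 2683526) and the host-side server on /var/tmp/host.sock (pid 2683773). The helper's internals are not shown in the trace; an illustrative minimal equivalent just polls a cheap RPC until the socket answers (rpc_get_methods is a standard SPDK RPC; the loop body is a sketch, not the real helper from autotest_common.sh, which also checks the pid is still alive):

    # Illustrative stand-in for waitforlisten: poll until the RPC server is up.
    wait_for_rpc() {
        local sock=$1
        for _ in $(seq 1 100); do
            scripts/rpc.py -t 1 -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }
    wait_for_rpc /var/tmp/spdk.sock && wait_for_rpc /var/tmp/host.sock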
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.HtR 00:16:05.976 20:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:05.977 20:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.OgZ 00:16:05.977 20:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.977 20:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.977 20:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.977 20:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.OgZ 00:16:05.977 20:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.OgZ 00:16:06.232 20:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.7FD ]] 00:16:06.232 20:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.7FD 00:16:06.232 20:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.232 20:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.232 20:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.232 20:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.7FD 00:16:06.232 20:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.7FD 00:16:06.232 20:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:06.232 20:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.mue 00:16:06.232 20:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.232 20:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.232 20:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.233 20:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.mue 00:16:06.233 20:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.mue 00:16:06.489 20:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.rf9 ]] 00:16:06.489 20:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rf9 00:16:06.489 20:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.489 20:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.489 20:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.489 20:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rf9 00:16:06.489 20:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.rf9 00:16:06.746 20:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:06.746 20:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Bcn 00:16:06.746 20:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.746 20:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.746 20:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.746 20:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Bcn 00:16:06.746 20:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Bcn 00:16:06.746 20:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:16:06.746 20:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:06.746 20:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:06.746 20:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:06.746 20:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:06.746 20:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:07.003 20:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:16:07.004 20:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:07.004 20:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:07.004 20:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:07.004 20:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:07.004 20:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.004 20:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.004 20:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.004 20:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.004 20:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.004 20:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.004 20:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.261 00:16:07.261 20:41:41 nvmf_tcp.nvmf_auth_target -- 
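[annotation] With all four keys and their controller counterparts registered via keyring_file_add_key on both sockets, the auth.sh@91-96 loops walk digest x dhgroup x key index; each pass reduces to three RPCs. A condensed sketch of the first pass (sha256 / null / key0), with commands taken from the trace, the long workspace path elided, and rpc standing in for scripts/rpc.py:

    # One connect_authenticate pass, condensed from the trace.
    rpc=scripts/rpc.py
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups null
    $rpc -s /var/tmp/spdk.sock nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0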
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:07.261 20:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:07.261 20:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.519 20:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.519 20:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.519 20:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.519 20:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.519 20:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.519 20:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:07.519 { 00:16:07.519 "cntlid": 1, 00:16:07.519 "qid": 0, 00:16:07.519 "state": "enabled", 00:16:07.519 "thread": "nvmf_tgt_poll_group_000", 00:16:07.519 "listen_address": { 00:16:07.519 "trtype": "TCP", 00:16:07.519 "adrfam": "IPv4", 00:16:07.519 "traddr": "10.0.0.2", 00:16:07.519 "trsvcid": "4420" 00:16:07.519 }, 00:16:07.519 "peer_address": { 00:16:07.519 "trtype": "TCP", 00:16:07.519 "adrfam": "IPv4", 00:16:07.519 "traddr": "10.0.0.1", 00:16:07.519 "trsvcid": "45686" 00:16:07.519 }, 00:16:07.519 "auth": { 00:16:07.519 "state": "completed", 00:16:07.519 "digest": "sha256", 00:16:07.519 "dhgroup": "null" 00:16:07.519 } 00:16:07.519 } 00:16:07.519 ]' 00:16:07.519 20:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:07.519 20:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:07.519 20:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:07.519 20:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:07.519 20:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:07.519 20:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.519 20:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.519 20:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.776 20:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzRkYzRlNTdhZTEyMDA0NGJiNmFiNjU1YjVkMTIyNTdlZmYwNmIxYjljNmNkZDMynomgHQ==: --dhchap-ctrl-secret DHHC-1:03:YTM2Nzg0YzRkMzJhOThmZmNjYTY5OTMzYzRlY2FjZWUzMWJkNmFhMWNlZGQ1YWExMmQ2N2VmODM5NzM4MzNkYz1U18s=: 00:16:08.340 20:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.340 20:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:08.340 20:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.340 20:41:42 
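[annotation] After each attach, the test reads the qpair descriptor back from the target and asserts that authentication actually completed with the expected digest and dhgroup, then detaches and re-runs the same handshake through the kernel initiator (nvme connect ... --dhchap-secret ... --dhchap-ctrl-secret ..., as just above). The assertions, condensed from the trace:

    # Post-attach assertions. Note the backslash-escaped pattern in [[ ]]:
    # it forces a literal (non-glob) comparison of the controller name.
    rpc=scripts/rpc.py
    name=$($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == \n\v\m\e\0 ]]
    qpairs=$($rpc -s /var/tmp/spdk.sock nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]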
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.340 20:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.340 20:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:08.340 20:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:08.340 20:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:08.340 20:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:16:08.340 20:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:08.340 20:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:08.340 20:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:08.340 20:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:08.340 20:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.340 20:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.340 20:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.340 20:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.340 20:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.340 20:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.340 20:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.615 00:16:08.615 20:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:08.615 20:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:08.615 20:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.873 20:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.873 20:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.873 20:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.873 20:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.873 20:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.873 20:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:08.873 { 00:16:08.873 "cntlid": 3, 00:16:08.873 "qid": 0, 00:16:08.873 
"state": "enabled", 00:16:08.873 "thread": "nvmf_tgt_poll_group_000", 00:16:08.873 "listen_address": { 00:16:08.873 "trtype": "TCP", 00:16:08.873 "adrfam": "IPv4", 00:16:08.873 "traddr": "10.0.0.2", 00:16:08.873 "trsvcid": "4420" 00:16:08.873 }, 00:16:08.873 "peer_address": { 00:16:08.873 "trtype": "TCP", 00:16:08.873 "adrfam": "IPv4", 00:16:08.873 "traddr": "10.0.0.1", 00:16:08.873 "trsvcid": "45714" 00:16:08.873 }, 00:16:08.873 "auth": { 00:16:08.873 "state": "completed", 00:16:08.873 "digest": "sha256", 00:16:08.873 "dhgroup": "null" 00:16:08.873 } 00:16:08.873 } 00:16:08.873 ]' 00:16:08.873 20:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:08.873 20:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:08.873 20:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:08.873 20:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:08.873 20:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:08.873 20:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.873 20:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.873 20:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.131 20:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZmMxNDEwYWQxN2UxZGI2NDBmMDZlYTY1YTZiMjJmYTPw6OVr: --dhchap-ctrl-secret DHHC-1:02:YjQxYmQzYWVmYzBhNTQ2MTAyYmY1MjkwMDczNzgzN2IyZjUzYTlhOGIxZmM2OWQ0pSq8zA==: 00:16:09.695 20:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.695 20:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:09.695 20:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.695 20:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.695 20:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.695 20:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:09.695 20:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:09.695 20:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:09.971 20:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:16:09.971 20:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:09.971 20:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:09.971 20:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:09.971 20:41:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:09.971 20:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.971 20:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.971 20:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.971 20:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.971 20:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.971 20:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.971 20:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.229 00:16:10.229 20:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:10.229 20:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:10.229 20:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.229 20:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.229 20:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.229 20:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.229 20:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.487 20:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.487 20:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:10.487 { 00:16:10.487 "cntlid": 5, 00:16:10.487 "qid": 0, 00:16:10.487 "state": "enabled", 00:16:10.487 "thread": "nvmf_tgt_poll_group_000", 00:16:10.487 "listen_address": { 00:16:10.487 "trtype": "TCP", 00:16:10.487 "adrfam": "IPv4", 00:16:10.487 "traddr": "10.0.0.2", 00:16:10.487 "trsvcid": "4420" 00:16:10.487 }, 00:16:10.487 "peer_address": { 00:16:10.487 "trtype": "TCP", 00:16:10.487 "adrfam": "IPv4", 00:16:10.487 "traddr": "10.0.0.1", 00:16:10.487 "trsvcid": "45740" 00:16:10.487 }, 00:16:10.487 "auth": { 00:16:10.487 "state": "completed", 00:16:10.487 "digest": "sha256", 00:16:10.487 "dhgroup": "null" 00:16:10.487 } 00:16:10.487 } 00:16:10.487 ]' 00:16:10.487 20:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:10.487 20:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:10.487 20:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:10.487 20:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:10.487 20:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:16:10.487 20:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.487 20:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.487 20:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.745 20:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YWRhMWQ3ZGYyMzVmMGRhNmQwOWIxOWQxOGY3YTI5ZGU2M2VkNjI3MDQ2ZGI4NjRl9gpe9Q==: --dhchap-ctrl-secret DHHC-1:01:NGNjMWI4NmEzOWFkYzEyYjQ3ODUyZTFiYjdmZjBkMzFwE2Ux: 00:16:11.312 20:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.312 20:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:11.312 20:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.312 20:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.312 20:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.312 20:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:11.312 20:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:11.313 20:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:11.313 20:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:16:11.313 20:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:11.313 20:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:11.313 20:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:11.313 20:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:11.313 20:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.313 20:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:11.313 20:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.313 20:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.313 20:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.313 20:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:11.313 20:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
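[annotation] The pass starting above is the asymmetric one: ckeys[3] was generated empty, so nvmf_subsystem_add_host receives only --dhchap-key key3 and authentication is unidirectional, i.e. the host proves its secret but the controller is never challenged back. The kernel-side mirror of that, as it appears a few entries below, passes a single secret (flags and values as in the trace; the secret itself elided here):

    # Unidirectional DH-HMAC-CHAP: host secret only, no --dhchap-ctrl-secret.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 \
        --dhchap-secret DHHC-1:03:...
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0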
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:11.570 00:16:11.570 20:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:11.570 20:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:11.570 20:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.830 20:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.830 20:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.830 20:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.830 20:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.830 20:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.830 20:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:11.830 { 00:16:11.830 "cntlid": 7, 00:16:11.830 "qid": 0, 00:16:11.830 "state": "enabled", 00:16:11.830 "thread": "nvmf_tgt_poll_group_000", 00:16:11.830 "listen_address": { 00:16:11.830 "trtype": "TCP", 00:16:11.830 "adrfam": "IPv4", 00:16:11.830 "traddr": "10.0.0.2", 00:16:11.830 "trsvcid": "4420" 00:16:11.830 }, 00:16:11.830 "peer_address": { 00:16:11.830 "trtype": "TCP", 00:16:11.830 "adrfam": "IPv4", 00:16:11.830 "traddr": "10.0.0.1", 00:16:11.830 "trsvcid": "45772" 00:16:11.830 }, 00:16:11.830 "auth": { 00:16:11.830 "state": "completed", 00:16:11.830 "digest": "sha256", 00:16:11.830 "dhgroup": "null" 00:16:11.830 } 00:16:11.830 } 00:16:11.830 ]' 00:16:11.830 20:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:11.830 20:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:11.830 20:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:11.830 20:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:11.830 20:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:12.154 20:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.154 20:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.154 20:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.154 20:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:Y2Y2M2NhMTIzMGM4YTY3NGZhMzRhMmE2YzdmZGU4Y2RkYjZkOTY2YjFkYmIzYjZmMDVlNWNiNWRmZTM0MWUxNqx2SvY=: 00:16:12.719 20:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.719 20:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:12.719 20:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.719 20:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.719 20:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.719 20:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:12.719 20:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:12.719 20:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:12.719 20:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:12.977 20:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:16:12.977 20:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:12.977 20:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:12.977 20:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:12.977 20:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:12.977 20:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.977 20:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.977 20:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.977 20:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.977 20:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.977 20:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.977 20:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.235 00:16:13.235 20:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:13.235 20:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:13.235 20:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.235 20:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.235 20:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.235 20:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
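[annotation] From here the whole four-key matrix repeats with the host reconfigured for --dhchap-dhgroups ffdhe2048; with a real FFDHE group the handshake adds an ephemeral finite-field Diffie-Hellman exchange on top of the HMAC challenge, which the qpair descriptors below reflect as "dhgroup": "ffdhe2048". The only host-side knob that changes between the null-group block above and this one:

    # Only difference from the null-group passes.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048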
-- # xtrace_disable 00:16:13.235 20:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.235 20:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.235 20:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:13.235 { 00:16:13.235 "cntlid": 9, 00:16:13.235 "qid": 0, 00:16:13.235 "state": "enabled", 00:16:13.235 "thread": "nvmf_tgt_poll_group_000", 00:16:13.235 "listen_address": { 00:16:13.235 "trtype": "TCP", 00:16:13.235 "adrfam": "IPv4", 00:16:13.235 "traddr": "10.0.0.2", 00:16:13.235 "trsvcid": "4420" 00:16:13.235 }, 00:16:13.235 "peer_address": { 00:16:13.235 "trtype": "TCP", 00:16:13.235 "adrfam": "IPv4", 00:16:13.235 "traddr": "10.0.0.1", 00:16:13.235 "trsvcid": "47850" 00:16:13.235 }, 00:16:13.235 "auth": { 00:16:13.235 "state": "completed", 00:16:13.235 "digest": "sha256", 00:16:13.235 "dhgroup": "ffdhe2048" 00:16:13.235 } 00:16:13.235 } 00:16:13.235 ]' 00:16:13.235 20:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:13.493 20:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:13.493 20:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:13.493 20:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:13.493 20:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:13.493 20:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.493 20:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.493 20:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.751 20:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzRkYzRlNTdhZTEyMDA0NGJiNmFiNjU1YjVkMTIyNTdlZmYwNmIxYjljNmNkZDMynomgHQ==: --dhchap-ctrl-secret DHHC-1:03:YTM2Nzg0YzRkMzJhOThmZmNjYTY5OTMzYzRlY2FjZWUzMWJkNmFhMWNlZGQ1YWExMmQ2N2VmODM5NzM4MzNkYz1U18s=: 00:16:14.334 20:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.334 20:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:14.334 20:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.334 20:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.334 20:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.334 20:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:14.334 20:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:14.334 20:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:16:14.334 20:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:16:14.334 20:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:14.334 20:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:14.334 20:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:14.334 20:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:14.334 20:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.334 20:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.334 20:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.334 20:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.334 20:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.334 20:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.334 20:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.591 00:16:14.592 20:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:14.592 20:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:14.592 20:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.849 20:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.849 20:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.849 20:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.849 20:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.849 20:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.849 20:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:14.849 { 00:16:14.849 "cntlid": 11, 00:16:14.849 "qid": 0, 00:16:14.849 "state": "enabled", 00:16:14.849 "thread": "nvmf_tgt_poll_group_000", 00:16:14.849 "listen_address": { 00:16:14.849 "trtype": "TCP", 00:16:14.849 "adrfam": "IPv4", 00:16:14.849 "traddr": "10.0.0.2", 00:16:14.849 "trsvcid": "4420" 00:16:14.849 }, 00:16:14.849 "peer_address": { 00:16:14.849 "trtype": "TCP", 00:16:14.849 "adrfam": "IPv4", 00:16:14.849 "traddr": "10.0.0.1", 00:16:14.849 "trsvcid": "47872" 00:16:14.849 }, 00:16:14.849 "auth": { 00:16:14.849 "state": "completed", 00:16:14.849 "digest": "sha256", 00:16:14.849 "dhgroup": "ffdhe2048" 00:16:14.849 } 00:16:14.849 } 00:16:14.849 ]' 00:16:14.849 
20:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:14.849 20:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:14.849 20:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:14.849 20:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:14.849 20:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:15.107 20:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.107 20:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.107 20:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.107 20:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZmMxNDEwYWQxN2UxZGI2NDBmMDZlYTY1YTZiMjJmYTPw6OVr: --dhchap-ctrl-secret DHHC-1:02:YjQxYmQzYWVmYzBhNTQ2MTAyYmY1MjkwMDczNzgzN2IyZjUzYTlhOGIxZmM2OWQ0pSq8zA==: 00:16:15.673 20:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.673 20:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:15.673 20:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.673 20:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.673 20:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.673 20:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:15.673 20:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:15.673 20:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:15.931 20:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:16:15.931 20:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:15.931 20:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:15.931 20:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:15.931 20:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:15.931 20:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.931 20:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.931 20:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.931 20:41:50 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:15.931 20:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.931 20:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.931 20:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.189 00:16:16.189 20:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:16.189 20:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:16.189 20:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.447 20:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.447 20:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.447 20:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.447 20:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.447 20:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.447 20:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:16.447 { 00:16:16.447 "cntlid": 13, 00:16:16.447 "qid": 0, 00:16:16.447 "state": "enabled", 00:16:16.447 "thread": "nvmf_tgt_poll_group_000", 00:16:16.447 "listen_address": { 00:16:16.447 "trtype": "TCP", 00:16:16.447 "adrfam": "IPv4", 00:16:16.447 "traddr": "10.0.0.2", 00:16:16.447 "trsvcid": "4420" 00:16:16.447 }, 00:16:16.447 "peer_address": { 00:16:16.447 "trtype": "TCP", 00:16:16.447 "adrfam": "IPv4", 00:16:16.447 "traddr": "10.0.0.1", 00:16:16.447 "trsvcid": "47902" 00:16:16.447 }, 00:16:16.447 "auth": { 00:16:16.447 "state": "completed", 00:16:16.447 "digest": "sha256", 00:16:16.447 "dhgroup": "ffdhe2048" 00:16:16.447 } 00:16:16.447 } 00:16:16.447 ]' 00:16:16.447 20:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:16.447 20:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:16.447 20:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:16.447 20:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:16.447 20:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:16.447 20:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.447 20:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.447 20:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.704 20:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YWRhMWQ3ZGYyMzVmMGRhNmQwOWIxOWQxOGY3YTI5ZGU2M2VkNjI3MDQ2ZGI4NjRl9gpe9Q==: --dhchap-ctrl-secret DHHC-1:01:NGNjMWI4NmEzOWFkYzEyYjQ3ODUyZTFiYjdmZjBkMzFwE2Ux: 00:16:17.269 20:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.269 20:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:17.269 20:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.269 20:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.269 20:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.269 20:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:17.269 20:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:17.269 20:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:17.527 20:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:16:17.527 20:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:17.527 20:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:17.527 20:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:17.527 20:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:17.527 20:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.527 20:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:17.527 20:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.527 20:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.527 20:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.527 20:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:17.528 20:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:17.784 00:16:17.784 20:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:17.784 20:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:17.784 20:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.041 20:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.041 20:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.041 20:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.041 20:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.041 20:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.041 20:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:18.041 { 00:16:18.041 "cntlid": 15, 00:16:18.041 "qid": 0, 00:16:18.041 "state": "enabled", 00:16:18.041 "thread": "nvmf_tgt_poll_group_000", 00:16:18.041 "listen_address": { 00:16:18.041 "trtype": "TCP", 00:16:18.041 "adrfam": "IPv4", 00:16:18.041 "traddr": "10.0.0.2", 00:16:18.041 "trsvcid": "4420" 00:16:18.041 }, 00:16:18.041 "peer_address": { 00:16:18.041 "trtype": "TCP", 00:16:18.041 "adrfam": "IPv4", 00:16:18.041 "traddr": "10.0.0.1", 00:16:18.041 "trsvcid": "47932" 00:16:18.041 }, 00:16:18.041 "auth": { 00:16:18.041 "state": "completed", 00:16:18.041 "digest": "sha256", 00:16:18.041 "dhgroup": "ffdhe2048" 00:16:18.041 } 00:16:18.041 } 00:16:18.041 ]' 00:16:18.041 20:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:18.041 20:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:18.041 20:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:18.041 20:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:18.041 20:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:18.041 20:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.041 20:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.041 20:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.299 20:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:Y2Y2M2NhMTIzMGM4YTY3NGZhMzRhMmE2YzdmZGU4Y2RkYjZkOTY2YjFkYmIzYjZmMDVlNWNiNWRmZTM0MWUxNqx2SvY=: 00:16:18.863 20:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.863 20:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:18.863 20:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.863 20:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.863 20:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.863 20:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:18.864 20:41:53 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:18.864 20:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:18.864 20:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:19.121 20:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:16:19.121 20:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:19.121 20:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:19.121 20:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:19.121 20:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:19.121 20:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.121 20:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.121 20:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.121 20:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.121 20:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.121 20:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.121 20:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.121 00:16:19.378 20:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:19.378 20:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:19.378 20:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.378 20:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.378 20:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.378 20:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.378 20:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.378 20:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.378 20:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:19.378 { 00:16:19.378 "cntlid": 17, 00:16:19.378 "qid": 0, 00:16:19.378 "state": "enabled", 00:16:19.378 "thread": "nvmf_tgt_poll_group_000", 00:16:19.378 "listen_address": { 00:16:19.378 "trtype": "TCP", 00:16:19.378 "adrfam": "IPv4", 00:16:19.378 "traddr": 
"10.0.0.2", 00:16:19.378 "trsvcid": "4420" 00:16:19.378 }, 00:16:19.378 "peer_address": { 00:16:19.378 "trtype": "TCP", 00:16:19.378 "adrfam": "IPv4", 00:16:19.378 "traddr": "10.0.0.1", 00:16:19.378 "trsvcid": "47964" 00:16:19.378 }, 00:16:19.378 "auth": { 00:16:19.378 "state": "completed", 00:16:19.378 "digest": "sha256", 00:16:19.378 "dhgroup": "ffdhe3072" 00:16:19.378 } 00:16:19.378 } 00:16:19.378 ]' 00:16:19.379 20:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:19.379 20:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:19.379 20:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:19.636 20:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:19.636 20:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:19.636 20:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.636 20:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.636 20:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.636 20:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzRkYzRlNTdhZTEyMDA0NGJiNmFiNjU1YjVkMTIyNTdlZmYwNmIxYjljNmNkZDMynomgHQ==: --dhchap-ctrl-secret DHHC-1:03:YTM2Nzg0YzRkMzJhOThmZmNjYTY5OTMzYzRlY2FjZWUzMWJkNmFhMWNlZGQ1YWExMmQ2N2VmODM5NzM4MzNkYz1U18s=: 00:16:20.570 20:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.570 20:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:20.570 20:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.570 20:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.570 20:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.570 20:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:20.570 20:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:20.570 20:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:20.570 20:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:16:20.570 20:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:20.570 20:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:20.570 20:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:20.570 20:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:20.570 20:41:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.570 20:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.570 20:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.570 20:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.570 20:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.570 20:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.570 20:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.831 00:16:20.831 20:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:20.831 20:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:20.831 20:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.090 20:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.090 20:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.090 20:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.090 20:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.090 20:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.090 20:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:21.090 { 00:16:21.090 "cntlid": 19, 00:16:21.090 "qid": 0, 00:16:21.090 "state": "enabled", 00:16:21.090 "thread": "nvmf_tgt_poll_group_000", 00:16:21.090 "listen_address": { 00:16:21.090 "trtype": "TCP", 00:16:21.090 "adrfam": "IPv4", 00:16:21.090 "traddr": "10.0.0.2", 00:16:21.090 "trsvcid": "4420" 00:16:21.090 }, 00:16:21.090 "peer_address": { 00:16:21.090 "trtype": "TCP", 00:16:21.090 "adrfam": "IPv4", 00:16:21.090 "traddr": "10.0.0.1", 00:16:21.090 "trsvcid": "47996" 00:16:21.090 }, 00:16:21.090 "auth": { 00:16:21.090 "state": "completed", 00:16:21.090 "digest": "sha256", 00:16:21.090 "dhgroup": "ffdhe3072" 00:16:21.090 } 00:16:21.090 } 00:16:21.090 ]' 00:16:21.090 20:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:21.090 20:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:21.090 20:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:21.090 20:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:21.090 20:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:21.090 20:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.090 20:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.090 20:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.349 20:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZmMxNDEwYWQxN2UxZGI2NDBmMDZlYTY1YTZiMjJmYTPw6OVr: --dhchap-ctrl-secret DHHC-1:02:YjQxYmQzYWVmYzBhNTQ2MTAyYmY1MjkwMDczNzgzN2IyZjUzYTlhOGIxZmM2OWQ0pSq8zA==: 00:16:21.916 20:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.916 20:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:21.916 20:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.916 20:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.916 20:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.916 20:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:21.916 20:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:21.916 20:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:21.916 20:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:16:21.916 20:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:21.916 20:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:21.916 20:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:21.916 20:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:21.916 20:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.916 20:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.175 20:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.175 20:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.175 20:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.175 20:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.175 20:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.175 00:16:22.175 20:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:22.175 20:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.175 20:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:22.433 20:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.433 20:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.433 20:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.433 20:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.433 20:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.433 20:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:22.433 { 00:16:22.433 "cntlid": 21, 00:16:22.433 "qid": 0, 00:16:22.433 "state": "enabled", 00:16:22.433 "thread": "nvmf_tgt_poll_group_000", 00:16:22.433 "listen_address": { 00:16:22.433 "trtype": "TCP", 00:16:22.433 "adrfam": "IPv4", 00:16:22.433 "traddr": "10.0.0.2", 00:16:22.433 "trsvcid": "4420" 00:16:22.433 }, 00:16:22.433 "peer_address": { 00:16:22.433 "trtype": "TCP", 00:16:22.433 "adrfam": "IPv4", 00:16:22.433 "traddr": "10.0.0.1", 00:16:22.433 "trsvcid": "48026" 00:16:22.433 }, 00:16:22.433 "auth": { 00:16:22.433 "state": "completed", 00:16:22.433 "digest": "sha256", 00:16:22.433 "dhgroup": "ffdhe3072" 00:16:22.433 } 00:16:22.433 } 00:16:22.433 ]' 00:16:22.433 20:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:22.433 20:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:22.433 20:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:22.692 20:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:22.692 20:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:22.692 20:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.692 20:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.692 20:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.692 20:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YWRhMWQ3ZGYyMzVmMGRhNmQwOWIxOWQxOGY3YTI5ZGU2M2VkNjI3MDQ2ZGI4NjRl9gpe9Q==: --dhchap-ctrl-secret DHHC-1:01:NGNjMWI4NmEzOWFkYzEyYjQ3ODUyZTFiYjdmZjBkMzFwE2Ux: 00:16:23.258 20:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
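The connect/disconnect pair just logged is the kernel-initiator half of each pass: nvme-cli is handed the same pre-generated secrets in their DHHC-1 text form, authenticates against the target, and the clean disconnect of one controller confirms the session came up. Schematically, with placeholder secrets (the real base64 strings are the ones printed in the surrounding trace, and the flags are exactly those the test uses):

    # Sketch: kernel-initiator side of one pass; the secrets are placeholders.
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
        -q "$hostnqn" --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 \
        --dhchap-secret 'DHHC-1:02:<host key, base64>:' \
        --dhchap-ctrl-secret 'DHHC-1:01:<controller key, base64>:'

    # "disconnected 1 controller(s)" implies the authenticated connect succeeded.
    nvme disconnect -n "$subnqn"

Passing --dhchap-ctrl-secret as well makes the exchange bidirectional, so the host also challenges the controller; that mirrors the ckeyN keys provisioned with --dhchap-ctrlr-key on the target side, and is omitted in the iterations where no controller key is registered.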
00:16:23.258 20:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:23.258 20:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.258 20:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.258 20:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.258 20:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:23.258 20:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:23.258 20:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:23.516 20:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:16:23.516 20:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:23.516 20:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:23.516 20:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:23.516 20:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:23.516 20:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.516 20:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:23.516 20:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.516 20:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.516 20:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.517 20:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:23.517 20:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:23.775 00:16:23.775 20:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:23.775 20:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:23.775 20:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.033 20:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.033 20:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.033 20:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.033 20:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:16:24.033 20:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.033 20:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:24.033 { 00:16:24.033 "cntlid": 23, 00:16:24.033 "qid": 0, 00:16:24.033 "state": "enabled", 00:16:24.033 "thread": "nvmf_tgt_poll_group_000", 00:16:24.033 "listen_address": { 00:16:24.033 "trtype": "TCP", 00:16:24.033 "adrfam": "IPv4", 00:16:24.033 "traddr": "10.0.0.2", 00:16:24.033 "trsvcid": "4420" 00:16:24.033 }, 00:16:24.033 "peer_address": { 00:16:24.033 "trtype": "TCP", 00:16:24.033 "adrfam": "IPv4", 00:16:24.033 "traddr": "10.0.0.1", 00:16:24.033 "trsvcid": "60366" 00:16:24.033 }, 00:16:24.033 "auth": { 00:16:24.033 "state": "completed", 00:16:24.033 "digest": "sha256", 00:16:24.033 "dhgroup": "ffdhe3072" 00:16:24.033 } 00:16:24.033 } 00:16:24.033 ]' 00:16:24.033 20:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:24.033 20:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:24.033 20:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:24.033 20:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:24.033 20:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:24.033 20:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.033 20:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.033 20:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.293 20:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:Y2Y2M2NhMTIzMGM4YTY3NGZhMzRhMmE2YzdmZGU4Y2RkYjZkOTY2YjFkYmIzYjZmMDVlNWNiNWRmZTM0MWUxNqx2SvY=: 00:16:24.861 20:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.861 20:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:24.861 20:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.861 20:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.861 20:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.861 20:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:24.861 20:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:24.861 20:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:24.861 20:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:25.120 20:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:16:25.120 20:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:25.120 20:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:25.120 20:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:25.120 20:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:25.120 20:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.120 20:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.120 20:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.120 20:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.120 20:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.120 20:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.120 20:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.379 00:16:25.379 20:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:25.379 20:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.379 20:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:25.638 20:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.638 20:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.638 20:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.638 20:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.638 20:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.638 20:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:25.638 { 00:16:25.638 "cntlid": 25, 00:16:25.638 "qid": 0, 00:16:25.638 "state": "enabled", 00:16:25.638 "thread": "nvmf_tgt_poll_group_000", 00:16:25.638 "listen_address": { 00:16:25.638 "trtype": "TCP", 00:16:25.638 "adrfam": "IPv4", 00:16:25.638 "traddr": "10.0.0.2", 00:16:25.638 "trsvcid": "4420" 00:16:25.638 }, 00:16:25.638 "peer_address": { 00:16:25.638 "trtype": "TCP", 00:16:25.638 "adrfam": "IPv4", 00:16:25.638 "traddr": "10.0.0.1", 00:16:25.638 "trsvcid": "60394" 00:16:25.638 }, 00:16:25.638 "auth": { 00:16:25.638 "state": "completed", 00:16:25.638 "digest": "sha256", 00:16:25.638 "dhgroup": "ffdhe4096" 00:16:25.638 } 00:16:25.638 } 00:16:25.638 ]' 00:16:25.638 20:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:25.638 20:41:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:25.638 20:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:25.638 20:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:25.638 20:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:25.638 20:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:25.638 20:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:25.638 20:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:25.897 20:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzRkYzRlNTdhZTEyMDA0NGJiNmFiNjU1YjVkMTIyNTdlZmYwNmIxYjljNmNkZDMynomgHQ==: --dhchap-ctrl-secret DHHC-1:03:YTM2Nzg0YzRkMzJhOThmZmNjYTY5OTMzYzRlY2FjZWUzMWJkNmFhMWNlZGQ1YWExMmQ2N2VmODM5NzM4MzNkYz1U18s=:
00:16:26.481 20:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:26.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:26.481 20:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:26.481 20:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:26.481 20:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:26.481 20:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:26.481 20:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:26.481 20:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:16:26.481 20:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:16:26.481 20:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1
00:16:26.481 20:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:26.481 20:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:26.481 20:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:16:26.481 20:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:16:26.481 20:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:26.481 20:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:26.481 20:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:26.481 20:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
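The remove_host/add_host churn around these entries is the test's inner loop: for every DH group, the host entry on the subsystem is re-created with each pre-registered key pair, and the whole attach/verify/connect cycle reruns. Reconstructed from the xtrace lines, the skeleton looks roughly like this (a sketch, not the script itself; keys and ckeys are arrays of keyring names registered earlier in the test, and an empty ckeys entry simply drops the controller-key argument, as with key3 above):

    # Sketch of the nested loops this trace is walking through. hostrpc and
    # rpc_cmd are the test's wrappers around rpc.py for the host and target
    # RPC sockets respectively.
    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144; do
        for keyid in "${!keys[@]}"; do
            hostrpc bdev_nvme_set_options --dhchap-digests sha256 \
                --dhchap-dhgroups "$dhgroup"
            ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
            rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
                --dhchap-key "key$keyid" "${ckey[@]}"
            # ... attach controller, assert qpair auth state, detach,
            #     then nvme connect / nvme disconnect ...
            rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
        done
    done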
00:16:26.481 20:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:26.481 20:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:26.755
00:16:26.755 20:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:26.755 20:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:26.755 20:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:27.013 20:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:27.013 20:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:27.013 20:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:27.013 20:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:27.013 20:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:27.013 20:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:27.013 {
00:16:27.013 "cntlid": 27,
00:16:27.013 "qid": 0,
00:16:27.013 "state": "enabled",
00:16:27.013 "thread": "nvmf_tgt_poll_group_000",
00:16:27.013 "listen_address": {
00:16:27.013 "trtype": "TCP",
00:16:27.013 "adrfam": "IPv4",
00:16:27.013 "traddr": "10.0.0.2",
00:16:27.013 "trsvcid": "4420"
00:16:27.013 },
00:16:27.013 "peer_address": {
00:16:27.013 "trtype": "TCP",
00:16:27.013 "adrfam": "IPv4",
00:16:27.013 "traddr": "10.0.0.1",
00:16:27.013 "trsvcid": "60414"
00:16:27.013 },
00:16:27.013 "auth": {
00:16:27.013 "state": "completed",
00:16:27.013 "digest": "sha256",
00:16:27.013 "dhgroup": "ffdhe4096"
00:16:27.013 }
00:16:27.013 }
00:16:27.013 ]'
00:16:27.013 20:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:27.013 20:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:27.013 20:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:27.013 20:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:27.013 20:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:27.013 20:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:27.013 20:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:27.013 20:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:27.271 20:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZmMxNDEwYWQxN2UxZGI2NDBmMDZlYTY1YTZiMjJmYTPw6OVr: --dhchap-ctrl-secret DHHC-1:02:YjQxYmQzYWVmYzBhNTQ2MTAyYmY1MjkwMDczNzgzN2IyZjUzYTlhOGIxZmM2OWQ0pSq8zA==: 00:16:27.840 20:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.841 20:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:27.841 20:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.841 20:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.841 20:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.841 20:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:27.841 20:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:27.841 20:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:28.100 20:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:16:28.100 20:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:28.100 20:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:28.100 20:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:28.100 20:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:28.100 20:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.100 20:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.100 20:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.100 20:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.100 20:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.100 20:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.100 20:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.359 00:16:28.359 20:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:28.359 20:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:28.359 20:42:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.750 20:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.750 20:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.750 20:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.750 20:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.750 20:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.750 20:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:28.750 { 00:16:28.750 "cntlid": 29, 00:16:28.750 "qid": 0, 00:16:28.750 "state": "enabled", 00:16:28.750 "thread": "nvmf_tgt_poll_group_000", 00:16:28.750 "listen_address": { 00:16:28.750 "trtype": "TCP", 00:16:28.750 "adrfam": "IPv4", 00:16:28.751 "traddr": "10.0.0.2", 00:16:28.751 "trsvcid": "4420" 00:16:28.751 }, 00:16:28.751 "peer_address": { 00:16:28.751 "trtype": "TCP", 00:16:28.751 "adrfam": "IPv4", 00:16:28.751 "traddr": "10.0.0.1", 00:16:28.751 "trsvcid": "60448" 00:16:28.751 }, 00:16:28.751 "auth": { 00:16:28.751 "state": "completed", 00:16:28.751 "digest": "sha256", 00:16:28.751 "dhgroup": "ffdhe4096" 00:16:28.751 } 00:16:28.751 } 00:16:28.751 ]' 00:16:28.751 20:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:28.751 20:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.751 20:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:28.751 20:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:28.751 20:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:28.751 20:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.751 20:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.751 20:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.009 20:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YWRhMWQ3ZGYyMzVmMGRhNmQwOWIxOWQxOGY3YTI5ZGU2M2VkNjI3MDQ2ZGI4NjRl9gpe9Q==: --dhchap-ctrl-secret DHHC-1:01:NGNjMWI4NmEzOWFkYzEyYjQ3ODUyZTFiYjdmZjBkMzFwE2Ux: 00:16:29.575 20:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.575 20:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:29.575 20:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.575 20:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.575 20:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.575 20:42:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:29.575 20:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:29.575 20:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:29.575 20:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:16:29.575 20:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:29.575 20:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:29.575 20:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:29.575 20:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:29.575 20:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.575 20:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:29.575 20:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.575 20:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.575 20:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.575 20:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:29.575 20:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:29.834 00:16:29.834 20:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:29.835 20:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:29.835 20:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.094 20:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.094 20:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.094 20:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.094 20:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.094 20:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.094 20:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:30.094 { 00:16:30.094 "cntlid": 31, 00:16:30.094 "qid": 0, 00:16:30.094 "state": "enabled", 00:16:30.094 "thread": "nvmf_tgt_poll_group_000", 00:16:30.094 "listen_address": { 00:16:30.094 "trtype": "TCP", 00:16:30.094 "adrfam": "IPv4", 00:16:30.094 "traddr": "10.0.0.2", 00:16:30.094 "trsvcid": "4420" 00:16:30.094 }, 
00:16:30.094 "peer_address": { 00:16:30.094 "trtype": "TCP", 00:16:30.094 "adrfam": "IPv4", 00:16:30.094 "traddr": "10.0.0.1", 00:16:30.094 "trsvcid": "60482" 00:16:30.094 }, 00:16:30.094 "auth": { 00:16:30.094 "state": "completed", 00:16:30.094 "digest": "sha256", 00:16:30.094 "dhgroup": "ffdhe4096" 00:16:30.094 } 00:16:30.094 } 00:16:30.094 ]' 00:16:30.094 20:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:30.094 20:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.094 20:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:30.353 20:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:30.353 20:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:30.353 20:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.353 20:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.353 20:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.353 20:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:Y2Y2M2NhMTIzMGM4YTY3NGZhMzRhMmE2YzdmZGU4Y2RkYjZkOTY2YjFkYmIzYjZmMDVlNWNiNWRmZTM0MWUxNqx2SvY=: 00:16:30.921 20:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.921 20:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:30.921 20:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.921 20:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.921 20:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.921 20:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:30.921 20:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:30.921 20:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:30.922 20:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:31.181 20:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:16:31.181 20:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:31.181 20:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:31.181 20:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:31.181 20:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:31.181 20:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:16:31.181 20:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.181 20:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.181 20:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.181 20:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.181 20:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.181 20:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.439 00:16:31.440 20:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:31.440 20:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.440 20:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:31.698 20:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.698 20:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.698 20:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.698 20:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.698 20:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.698 20:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:31.698 { 00:16:31.698 "cntlid": 33, 00:16:31.698 "qid": 0, 00:16:31.698 "state": "enabled", 00:16:31.698 "thread": "nvmf_tgt_poll_group_000", 00:16:31.698 "listen_address": { 00:16:31.698 "trtype": "TCP", 00:16:31.698 "adrfam": "IPv4", 00:16:31.698 "traddr": "10.0.0.2", 00:16:31.698 "trsvcid": "4420" 00:16:31.698 }, 00:16:31.698 "peer_address": { 00:16:31.698 "trtype": "TCP", 00:16:31.698 "adrfam": "IPv4", 00:16:31.698 "traddr": "10.0.0.1", 00:16:31.698 "trsvcid": "60506" 00:16:31.698 }, 00:16:31.698 "auth": { 00:16:31.698 "state": "completed", 00:16:31.698 "digest": "sha256", 00:16:31.698 "dhgroup": "ffdhe6144" 00:16:31.698 } 00:16:31.698 } 00:16:31.698 ]' 00:16:31.698 20:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:31.698 20:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:31.698 20:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:31.698 20:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:31.698 20:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:31.957 20:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.957 20:42:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.957 20:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.957 20:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzRkYzRlNTdhZTEyMDA0NGJiNmFiNjU1YjVkMTIyNTdlZmYwNmIxYjljNmNkZDMynomgHQ==: --dhchap-ctrl-secret DHHC-1:03:YTM2Nzg0YzRkMzJhOThmZmNjYTY5OTMzYzRlY2FjZWUzMWJkNmFhMWNlZGQ1YWExMmQ2N2VmODM5NzM4MzNkYz1U18s=: 00:16:32.524 20:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.524 20:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:32.524 20:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.524 20:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.524 20:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.524 20:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:32.524 20:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:32.524 20:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:32.783 20:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:16:32.783 20:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:32.783 20:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:32.783 20:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:32.783 20:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:32.783 20:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.783 20:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.783 20:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.783 20:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.783 20:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.783 20:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.783 20:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.042 00:16:33.042 20:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:33.042 20:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:33.042 20:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.301 20:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.301 20:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.301 20:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.301 20:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.301 20:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.301 20:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:33.301 { 00:16:33.301 "cntlid": 35, 00:16:33.301 "qid": 0, 00:16:33.301 "state": "enabled", 00:16:33.301 "thread": "nvmf_tgt_poll_group_000", 00:16:33.301 "listen_address": { 00:16:33.301 "trtype": "TCP", 00:16:33.301 "adrfam": "IPv4", 00:16:33.301 "traddr": "10.0.0.2", 00:16:33.301 "trsvcid": "4420" 00:16:33.301 }, 00:16:33.301 "peer_address": { 00:16:33.301 "trtype": "TCP", 00:16:33.301 "adrfam": "IPv4", 00:16:33.301 "traddr": "10.0.0.1", 00:16:33.301 "trsvcid": "45956" 00:16:33.301 }, 00:16:33.301 "auth": { 00:16:33.301 "state": "completed", 00:16:33.301 "digest": "sha256", 00:16:33.301 "dhgroup": "ffdhe6144" 00:16:33.301 } 00:16:33.301 } 00:16:33.301 ]' 00:16:33.301 20:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:33.301 20:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.301 20:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:33.301 20:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:33.301 20:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:33.301 20:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.301 20:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.301 20:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.560 20:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZmMxNDEwYWQxN2UxZGI2NDBmMDZlYTY1YTZiMjJmYTPw6OVr: --dhchap-ctrl-secret DHHC-1:02:YjQxYmQzYWVmYzBhNTQ2MTAyYmY1MjkwMDczNzgzN2IyZjUzYTlhOGIxZmM2OWQ0pSq8zA==: 00:16:34.128 20:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.128 20:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:34.128 20:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.128 20:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.128 20:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.128 20:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:34.128 20:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:34.128 20:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:34.385 20:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:16:34.386 20:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:34.386 20:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:34.386 20:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:34.386 20:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:34.386 20:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.386 20:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.386 20:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.386 20:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.386 20:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.386 20:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.386 20:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.644 00:16:34.644 20:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:34.644 20:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:34.644 20:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.903 20:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.903 20:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.903 20:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.903 20:42:09 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:34.903 20:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.903 20:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:34.903 { 00:16:34.903 "cntlid": 37, 00:16:34.903 "qid": 0, 00:16:34.903 "state": "enabled", 00:16:34.903 "thread": "nvmf_tgt_poll_group_000", 00:16:34.903 "listen_address": { 00:16:34.903 "trtype": "TCP", 00:16:34.903 "adrfam": "IPv4", 00:16:34.903 "traddr": "10.0.0.2", 00:16:34.903 "trsvcid": "4420" 00:16:34.903 }, 00:16:34.903 "peer_address": { 00:16:34.903 "trtype": "TCP", 00:16:34.903 "adrfam": "IPv4", 00:16:34.903 "traddr": "10.0.0.1", 00:16:34.903 "trsvcid": "45980" 00:16:34.903 }, 00:16:34.903 "auth": { 00:16:34.903 "state": "completed", 00:16:34.903 "digest": "sha256", 00:16:34.903 "dhgroup": "ffdhe6144" 00:16:34.903 } 00:16:34.903 } 00:16:34.903 ]' 00:16:34.903 20:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:34.903 20:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:34.903 20:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:34.903 20:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:34.903 20:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:35.162 20:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.162 20:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.162 20:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.162 20:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YWRhMWQ3ZGYyMzVmMGRhNmQwOWIxOWQxOGY3YTI5ZGU2M2VkNjI3MDQ2ZGI4NjRl9gpe9Q==: --dhchap-ctrl-secret DHHC-1:01:NGNjMWI4NmEzOWFkYzEyYjQ3ODUyZTFiYjdmZjBkMzFwE2Ux: 00:16:35.730 20:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.730 20:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:35.730 20:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.730 20:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.730 20:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.730 20:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:35.730 20:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:35.730 20:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:35.989 20:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:16:35.989 20:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:35.989 20:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:35.989 20:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:35.989 20:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:35.989 20:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.989 20:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:35.989 20:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.989 20:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.989 20:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.990 20:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:35.990 20:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:36.248 00:16:36.248 20:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:36.249 20:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:36.249 20:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.507 20:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.507 20:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.507 20:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.507 20:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.507 20:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.507 20:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:36.507 { 00:16:36.507 "cntlid": 39, 00:16:36.507 "qid": 0, 00:16:36.507 "state": "enabled", 00:16:36.507 "thread": "nvmf_tgt_poll_group_000", 00:16:36.507 "listen_address": { 00:16:36.507 "trtype": "TCP", 00:16:36.507 "adrfam": "IPv4", 00:16:36.507 "traddr": "10.0.0.2", 00:16:36.507 "trsvcid": "4420" 00:16:36.507 }, 00:16:36.507 "peer_address": { 00:16:36.507 "trtype": "TCP", 00:16:36.507 "adrfam": "IPv4", 00:16:36.507 "traddr": "10.0.0.1", 00:16:36.507 "trsvcid": "46018" 00:16:36.507 }, 00:16:36.507 "auth": { 00:16:36.507 "state": "completed", 00:16:36.507 "digest": "sha256", 00:16:36.507 "dhgroup": "ffdhe6144" 00:16:36.507 } 00:16:36.507 } 00:16:36.507 ]' 00:16:36.507 20:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:36.507 20:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:36.507 20:42:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:36.507 20:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:36.507 20:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:36.766 20:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.766 20:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.766 20:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.766 20:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:Y2Y2M2NhMTIzMGM4YTY3NGZhMzRhMmE2YzdmZGU4Y2RkYjZkOTY2YjFkYmIzYjZmMDVlNWNiNWRmZTM0MWUxNqx2SvY=: 00:16:37.335 20:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.335 20:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:37.335 20:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.335 20:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.335 20:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.335 20:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:37.335 20:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:37.335 20:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:37.335 20:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:37.594 20:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:16:37.594 20:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:37.594 20:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:37.594 20:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:37.594 20:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:37.594 20:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.594 20:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.594 20:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.594 20:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.594 20:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.594 20:42:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.594 20:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.163 00:16:38.163 20:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:38.163 20:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:38.163 20:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.163 20:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.163 20:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.163 20:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.163 20:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.163 20:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.163 20:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:38.163 { 00:16:38.163 "cntlid": 41, 00:16:38.163 "qid": 0, 00:16:38.163 "state": "enabled", 00:16:38.163 "thread": "nvmf_tgt_poll_group_000", 00:16:38.163 "listen_address": { 00:16:38.163 "trtype": "TCP", 00:16:38.163 "adrfam": "IPv4", 00:16:38.163 "traddr": "10.0.0.2", 00:16:38.163 "trsvcid": "4420" 00:16:38.163 }, 00:16:38.163 "peer_address": { 00:16:38.163 "trtype": "TCP", 00:16:38.163 "adrfam": "IPv4", 00:16:38.163 "traddr": "10.0.0.1", 00:16:38.163 "trsvcid": "46048" 00:16:38.163 }, 00:16:38.163 "auth": { 00:16:38.163 "state": "completed", 00:16:38.163 "digest": "sha256", 00:16:38.163 "dhgroup": "ffdhe8192" 00:16:38.163 } 00:16:38.163 } 00:16:38.163 ]' 00:16:38.163 20:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:38.163 20:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.163 20:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:38.421 20:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:38.421 20:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:38.421 20:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.421 20:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.421 20:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.421 20:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret 
DHHC-1:00:MzRkYzRlNTdhZTEyMDA0NGJiNmFiNjU1YjVkMTIyNTdlZmYwNmIxYjljNmNkZDMynomgHQ==: --dhchap-ctrl-secret DHHC-1:03:YTM2Nzg0YzRkMzJhOThmZmNjYTY5OTMzYzRlY2FjZWUzMWJkNmFhMWNlZGQ1YWExMmQ2N2VmODM5NzM4MzNkYz1U18s=: 00:16:38.987 20:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.987 20:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:38.987 20:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.987 20:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.246 20:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.246 20:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:39.246 20:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:39.246 20:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:39.246 20:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:16:39.246 20:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:39.246 20:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:39.246 20:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:39.246 20:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:39.246 20:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.246 20:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.246 20:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.246 20:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.246 20:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.246 20:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.246 20:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.815 00:16:39.815 20:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:39.815 20:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.815 20:42:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:40.074 20:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.074 20:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.074 20:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.074 20:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.074 20:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.074 20:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:40.074 { 00:16:40.074 "cntlid": 43, 00:16:40.074 "qid": 0, 00:16:40.074 "state": "enabled", 00:16:40.074 "thread": "nvmf_tgt_poll_group_000", 00:16:40.074 "listen_address": { 00:16:40.074 "trtype": "TCP", 00:16:40.074 "adrfam": "IPv4", 00:16:40.074 "traddr": "10.0.0.2", 00:16:40.074 "trsvcid": "4420" 00:16:40.074 }, 00:16:40.074 "peer_address": { 00:16:40.074 "trtype": "TCP", 00:16:40.074 "adrfam": "IPv4", 00:16:40.074 "traddr": "10.0.0.1", 00:16:40.074 "trsvcid": "46086" 00:16:40.074 }, 00:16:40.074 "auth": { 00:16:40.074 "state": "completed", 00:16:40.074 "digest": "sha256", 00:16:40.074 "dhgroup": "ffdhe8192" 00:16:40.074 } 00:16:40.074 } 00:16:40.074 ]' 00:16:40.074 20:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:40.074 20:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:40.074 20:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:40.074 20:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:40.074 20:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:40.074 20:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.074 20:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.074 20:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.333 20:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZmMxNDEwYWQxN2UxZGI2NDBmMDZlYTY1YTZiMjJmYTPw6OVr: --dhchap-ctrl-secret DHHC-1:02:YjQxYmQzYWVmYzBhNTQ2MTAyYmY1MjkwMDczNzgzN2IyZjUzYTlhOGIxZmM2OWQ0pSq8zA==: 00:16:40.901 20:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.901 20:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:40.901 20:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.901 20:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.901 20:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.901 20:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:40.901 20:42:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:40.901 20:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:40.901 20:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:16:40.901 20:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:40.901 20:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:40.901 20:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:40.901 20:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:40.901 20:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.902 20:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.902 20:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.902 20:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.902 20:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.902 20:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.902 20:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.469 00:16:41.469 20:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:41.469 20:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:41.469 20:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.727 20:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.727 20:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.727 20:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.727 20:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.727 20:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.727 20:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:41.727 { 00:16:41.727 "cntlid": 45, 00:16:41.727 "qid": 0, 00:16:41.727 "state": "enabled", 00:16:41.727 "thread": "nvmf_tgt_poll_group_000", 00:16:41.727 "listen_address": { 00:16:41.727 "trtype": "TCP", 00:16:41.727 "adrfam": "IPv4", 00:16:41.727 "traddr": "10.0.0.2", 00:16:41.727 "trsvcid": "4420" 00:16:41.727 }, 00:16:41.727 
"peer_address": { 00:16:41.727 "trtype": "TCP", 00:16:41.727 "adrfam": "IPv4", 00:16:41.727 "traddr": "10.0.0.1", 00:16:41.727 "trsvcid": "46116" 00:16:41.727 }, 00:16:41.727 "auth": { 00:16:41.727 "state": "completed", 00:16:41.727 "digest": "sha256", 00:16:41.727 "dhgroup": "ffdhe8192" 00:16:41.727 } 00:16:41.727 } 00:16:41.727 ]' 00:16:41.727 20:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:41.727 20:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.727 20:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:41.727 20:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:41.727 20:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:41.727 20:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.727 20:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.727 20:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.985 20:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YWRhMWQ3ZGYyMzVmMGRhNmQwOWIxOWQxOGY3YTI5ZGU2M2VkNjI3MDQ2ZGI4NjRl9gpe9Q==: --dhchap-ctrl-secret DHHC-1:01:NGNjMWI4NmEzOWFkYzEyYjQ3ODUyZTFiYjdmZjBkMzFwE2Ux: 00:16:42.551 20:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.551 20:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:42.551 20:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.551 20:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.551 20:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.551 20:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:42.551 20:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:42.551 20:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:42.809 20:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:16:42.809 20:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:42.809 20:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:42.809 20:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:42.809 20:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:42.809 20:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.809 20:42:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:42.809 20:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.809 20:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.809 20:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.809 20:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:42.809 20:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:43.067 00:16:43.325 20:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:43.325 20:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:43.325 20:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.325 20:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.325 20:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.325 20:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.325 20:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.325 20:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.325 20:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:43.325 { 00:16:43.325 "cntlid": 47, 00:16:43.325 "qid": 0, 00:16:43.325 "state": "enabled", 00:16:43.325 "thread": "nvmf_tgt_poll_group_000", 00:16:43.325 "listen_address": { 00:16:43.325 "trtype": "TCP", 00:16:43.325 "adrfam": "IPv4", 00:16:43.325 "traddr": "10.0.0.2", 00:16:43.325 "trsvcid": "4420" 00:16:43.325 }, 00:16:43.325 "peer_address": { 00:16:43.325 "trtype": "TCP", 00:16:43.325 "adrfam": "IPv4", 00:16:43.325 "traddr": "10.0.0.1", 00:16:43.325 "trsvcid": "39530" 00:16:43.325 }, 00:16:43.325 "auth": { 00:16:43.325 "state": "completed", 00:16:43.325 "digest": "sha256", 00:16:43.325 "dhgroup": "ffdhe8192" 00:16:43.325 } 00:16:43.325 } 00:16:43.325 ]' 00:16:43.325 20:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:43.325 20:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:43.325 20:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:43.584 20:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:43.584 20:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:43.584 20:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.584 20:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.584 20:42:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.584 20:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:Y2Y2M2NhMTIzMGM4YTY3NGZhMzRhMmE2YzdmZGU4Y2RkYjZkOTY2YjFkYmIzYjZmMDVlNWNiNWRmZTM0MWUxNqx2SvY=: 00:16:44.151 20:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.410 20:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:44.410 20:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.410 20:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.410 20:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.410 20:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:44.410 20:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:44.410 20:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:44.410 20:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:44.410 20:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:44.410 20:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:16:44.410 20:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:44.410 20:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:44.410 20:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:44.410 20:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:44.410 20:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.410 20:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.410 20:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.410 20:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.410 20:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.410 20:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.410 20:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.669 00:16:44.669 20:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:44.669 20:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:44.669 20:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.927 20:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.927 20:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.927 20:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.927 20:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.927 20:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.927 20:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:44.927 { 00:16:44.927 "cntlid": 49, 00:16:44.927 "qid": 0, 00:16:44.927 "state": "enabled", 00:16:44.927 "thread": "nvmf_tgt_poll_group_000", 00:16:44.927 "listen_address": { 00:16:44.927 "trtype": "TCP", 00:16:44.927 "adrfam": "IPv4", 00:16:44.927 "traddr": "10.0.0.2", 00:16:44.927 "trsvcid": "4420" 00:16:44.927 }, 00:16:44.927 "peer_address": { 00:16:44.927 "trtype": "TCP", 00:16:44.927 "adrfam": "IPv4", 00:16:44.927 "traddr": "10.0.0.1", 00:16:44.927 "trsvcid": "39560" 00:16:44.927 }, 00:16:44.927 "auth": { 00:16:44.927 "state": "completed", 00:16:44.927 "digest": "sha384", 00:16:44.927 "dhgroup": "null" 00:16:44.927 } 00:16:44.927 } 00:16:44.927 ]' 00:16:44.927 20:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:44.927 20:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:44.927 20:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:44.927 20:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:44.927 20:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:44.927 20:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.927 20:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.927 20:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.186 20:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzRkYzRlNTdhZTEyMDA0NGJiNmFiNjU1YjVkMTIyNTdlZmYwNmIxYjljNmNkZDMynomgHQ==: --dhchap-ctrl-secret DHHC-1:03:YTM2Nzg0YzRkMzJhOThmZmNjYTY5OTMzYzRlY2FjZWUzMWJkNmFhMWNlZGQ1YWExMmQ2N2VmODM5NzM4MzNkYz1U18s=: 00:16:45.754 20:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.754 20:42:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:45.754 20:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.754 20:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.754 20:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.754 20:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:45.754 20:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:45.754 20:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:46.013 20:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:16:46.013 20:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.013 20:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:46.013 20:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:46.013 20:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:46.013 20:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.013 20:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.013 20:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.013 20:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.013 20:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.013 20:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.013 20:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.271 00:16:46.271 20:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:46.271 20:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.271 20:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:46.530 20:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.530 20:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.530 20:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.530 20:42:20 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:46.530 20:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.530 20:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:46.530 { 00:16:46.530 "cntlid": 51, 00:16:46.530 "qid": 0, 00:16:46.530 "state": "enabled", 00:16:46.530 "thread": "nvmf_tgt_poll_group_000", 00:16:46.530 "listen_address": { 00:16:46.530 "trtype": "TCP", 00:16:46.530 "adrfam": "IPv4", 00:16:46.530 "traddr": "10.0.0.2", 00:16:46.530 "trsvcid": "4420" 00:16:46.530 }, 00:16:46.530 "peer_address": { 00:16:46.530 "trtype": "TCP", 00:16:46.530 "adrfam": "IPv4", 00:16:46.530 "traddr": "10.0.0.1", 00:16:46.530 "trsvcid": "39590" 00:16:46.530 }, 00:16:46.530 "auth": { 00:16:46.530 "state": "completed", 00:16:46.530 "digest": "sha384", 00:16:46.530 "dhgroup": "null" 00:16:46.530 } 00:16:46.530 } 00:16:46.530 ]' 00:16:46.530 20:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:46.530 20:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:46.530 20:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:46.530 20:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:46.530 20:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:46.530 20:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.530 20:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.530 20:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.789 20:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZmMxNDEwYWQxN2UxZGI2NDBmMDZlYTY1YTZiMjJmYTPw6OVr: --dhchap-ctrl-secret DHHC-1:02:YjQxYmQzYWVmYzBhNTQ2MTAyYmY1MjkwMDczNzgzN2IyZjUzYTlhOGIxZmM2OWQ0pSq8zA==: 00:16:47.356 20:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.356 20:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:47.356 20:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.356 20:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.356 20:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.356 20:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:47.357 20:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:47.357 20:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:47.615 20:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:16:47.615 20:42:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:47.615 20:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:47.615 20:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:47.615 20:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:47.615 20:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.615 20:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.615 20:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.615 20:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.615 20:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.615 20:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.615 20:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.615 00:16:47.874 20:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.874 20:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.874 20:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.874 20:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.874 20:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.874 20:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.874 20:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.874 20:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.874 20:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.874 { 00:16:47.874 "cntlid": 53, 00:16:47.874 "qid": 0, 00:16:47.874 "state": "enabled", 00:16:47.874 "thread": "nvmf_tgt_poll_group_000", 00:16:47.874 "listen_address": { 00:16:47.874 "trtype": "TCP", 00:16:47.874 "adrfam": "IPv4", 00:16:47.874 "traddr": "10.0.0.2", 00:16:47.874 "trsvcid": "4420" 00:16:47.874 }, 00:16:47.874 "peer_address": { 00:16:47.874 "trtype": "TCP", 00:16:47.874 "adrfam": "IPv4", 00:16:47.874 "traddr": "10.0.0.1", 00:16:47.874 "trsvcid": "39604" 00:16:47.874 }, 00:16:47.874 "auth": { 00:16:47.874 "state": "completed", 00:16:47.874 "digest": "sha384", 00:16:47.874 "dhgroup": "null" 00:16:47.874 } 00:16:47.874 } 00:16:47.874 ]' 00:16:47.874 20:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.874 20:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:16:47.874 20:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:48.133 20:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:48.133 20:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:48.133 20:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.133 20:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.133 20:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.133 20:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YWRhMWQ3ZGYyMzVmMGRhNmQwOWIxOWQxOGY3YTI5ZGU2M2VkNjI3MDQ2ZGI4NjRl9gpe9Q==: --dhchap-ctrl-secret DHHC-1:01:NGNjMWI4NmEzOWFkYzEyYjQ3ODUyZTFiYjdmZjBkMzFwE2Ux: 00:16:48.701 20:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.701 20:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:48.701 20:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.701 20:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.959 20:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.959 20:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:48.959 20:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:48.959 20:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:48.959 20:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:16:48.959 20:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:48.959 20:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:48.959 20:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:48.959 20:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:48.959 20:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.959 20:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:48.959 20:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.959 20:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.959 20:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.959 20:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:48.959 20:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.218 00:16:49.218 20:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.218 20:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.218 20:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.477 20:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.477 20:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.477 20:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.477 20:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.477 20:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.477 20:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:49.477 { 00:16:49.477 "cntlid": 55, 00:16:49.477 "qid": 0, 00:16:49.477 "state": "enabled", 00:16:49.477 "thread": "nvmf_tgt_poll_group_000", 00:16:49.477 "listen_address": { 00:16:49.477 "trtype": "TCP", 00:16:49.477 "adrfam": "IPv4", 00:16:49.477 "traddr": "10.0.0.2", 00:16:49.477 "trsvcid": "4420" 00:16:49.477 }, 00:16:49.477 "peer_address": { 00:16:49.477 "trtype": "TCP", 00:16:49.477 "adrfam": "IPv4", 00:16:49.477 "traddr": "10.0.0.1", 00:16:49.477 "trsvcid": "39636" 00:16:49.477 }, 00:16:49.477 "auth": { 00:16:49.477 "state": "completed", 00:16:49.477 "digest": "sha384", 00:16:49.477 "dhgroup": "null" 00:16:49.477 } 00:16:49.477 } 00:16:49.477 ]' 00:16:49.477 20:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.477 20:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:49.477 20:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.477 20:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:49.477 20:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.477 20:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.477 20:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.477 20:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.736 20:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:Y2Y2M2NhMTIzMGM4YTY3NGZhMzRhMmE2YzdmZGU4Y2RkYjZkOTY2YjFkYmIzYjZmMDVlNWNiNWRmZTM0MWUxNqx2SvY=: 00:16:50.304 20:42:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.304 20:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:50.304 20:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.304 20:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.304 20:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.304 20:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:50.304 20:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:50.304 20:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:50.304 20:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:50.563 20:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:16:50.563 20:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:50.563 20:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:50.563 20:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:50.563 20:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:50.563 20:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.563 20:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.563 20:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.563 20:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.563 20:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.563 20:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.563 20:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.821 00:16:50.821 20:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:50.821 20:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.821 20:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:50.821 20:42:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.821 20:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.821 20:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.821 20:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.080 20:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.080 20:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.080 { 00:16:51.080 "cntlid": 57, 00:16:51.080 "qid": 0, 00:16:51.080 "state": "enabled", 00:16:51.080 "thread": "nvmf_tgt_poll_group_000", 00:16:51.080 "listen_address": { 00:16:51.080 "trtype": "TCP", 00:16:51.080 "adrfam": "IPv4", 00:16:51.080 "traddr": "10.0.0.2", 00:16:51.080 "trsvcid": "4420" 00:16:51.080 }, 00:16:51.080 "peer_address": { 00:16:51.080 "trtype": "TCP", 00:16:51.080 "adrfam": "IPv4", 00:16:51.080 "traddr": "10.0.0.1", 00:16:51.080 "trsvcid": "39680" 00:16:51.080 }, 00:16:51.080 "auth": { 00:16:51.080 "state": "completed", 00:16:51.080 "digest": "sha384", 00:16:51.080 "dhgroup": "ffdhe2048" 00:16:51.080 } 00:16:51.080 } 00:16:51.080 ]' 00:16:51.080 20:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:51.080 20:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:51.080 20:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:51.080 20:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:51.080 20:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:51.080 20:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.080 20:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.080 20:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.338 20:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzRkYzRlNTdhZTEyMDA0NGJiNmFiNjU1YjVkMTIyNTdlZmYwNmIxYjljNmNkZDMynomgHQ==: --dhchap-ctrl-secret DHHC-1:03:YTM2Nzg0YzRkMzJhOThmZmNjYTY5OTMzYzRlY2FjZWUzMWJkNmFhMWNlZGQ1YWExMmQ2N2VmODM5NzM4MzNkYz1U18s=: 00:16:51.905 20:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.905 20:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:51.905 20:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.905 20:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.905 20:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.905 20:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:51.905 20:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:51.905 20:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:51.905 20:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:16:51.905 20:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:51.905 20:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:51.905 20:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:51.905 20:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:51.905 20:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.905 20:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.905 20:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.905 20:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.905 20:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.905 20:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.905 20:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.163 00:16:52.163 20:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:52.163 20:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:52.163 20:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.421 20:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.421 20:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.421 20:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.422 20:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.422 20:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.422 20:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:52.422 { 00:16:52.422 "cntlid": 59, 00:16:52.422 "qid": 0, 00:16:52.422 "state": "enabled", 00:16:52.422 "thread": "nvmf_tgt_poll_group_000", 00:16:52.422 "listen_address": { 00:16:52.422 "trtype": "TCP", 00:16:52.422 "adrfam": "IPv4", 00:16:52.422 "traddr": "10.0.0.2", 00:16:52.422 "trsvcid": "4420" 00:16:52.422 }, 00:16:52.422 "peer_address": { 00:16:52.422 "trtype": "TCP", 00:16:52.422 "adrfam": "IPv4", 00:16:52.422 
"traddr": "10.0.0.1", 00:16:52.422 "trsvcid": "39722" 00:16:52.422 }, 00:16:52.422 "auth": { 00:16:52.422 "state": "completed", 00:16:52.422 "digest": "sha384", 00:16:52.422 "dhgroup": "ffdhe2048" 00:16:52.422 } 00:16:52.422 } 00:16:52.422 ]' 00:16:52.422 20:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:52.422 20:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:52.422 20:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:52.422 20:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:52.422 20:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:52.681 20:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.681 20:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.681 20:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.681 20:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZmMxNDEwYWQxN2UxZGI2NDBmMDZlYTY1YTZiMjJmYTPw6OVr: --dhchap-ctrl-secret DHHC-1:02:YjQxYmQzYWVmYzBhNTQ2MTAyYmY1MjkwMDczNzgzN2IyZjUzYTlhOGIxZmM2OWQ0pSq8zA==: 00:16:53.249 20:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.249 20:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:53.249 20:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.249 20:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.249 20:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.249 20:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:53.249 20:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:53.249 20:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:53.508 20:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:16:53.508 20:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:53.508 20:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:53.508 20:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:53.508 20:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:53.508 20:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.508 20:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.508 20:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.508 20:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.508 20:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.508 20:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.508 20:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.768 00:16:53.768 20:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.768 20:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:53.768 20:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.027 20:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.027 20:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.027 20:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.027 20:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.027 20:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.027 20:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:54.027 { 00:16:54.027 "cntlid": 61, 00:16:54.027 "qid": 0, 00:16:54.027 "state": "enabled", 00:16:54.027 "thread": "nvmf_tgt_poll_group_000", 00:16:54.027 "listen_address": { 00:16:54.027 "trtype": "TCP", 00:16:54.027 "adrfam": "IPv4", 00:16:54.027 "traddr": "10.0.0.2", 00:16:54.027 "trsvcid": "4420" 00:16:54.027 }, 00:16:54.027 "peer_address": { 00:16:54.027 "trtype": "TCP", 00:16:54.027 "adrfam": "IPv4", 00:16:54.027 "traddr": "10.0.0.1", 00:16:54.027 "trsvcid": "39034" 00:16:54.027 }, 00:16:54.027 "auth": { 00:16:54.027 "state": "completed", 00:16:54.027 "digest": "sha384", 00:16:54.027 "dhgroup": "ffdhe2048" 00:16:54.027 } 00:16:54.027 } 00:16:54.027 ]' 00:16:54.027 20:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:54.027 20:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:54.027 20:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:54.027 20:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:54.027 20:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:54.027 20:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.027 20:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.027 20:42:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.306 20:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YWRhMWQ3ZGYyMzVmMGRhNmQwOWIxOWQxOGY3YTI5ZGU2M2VkNjI3MDQ2ZGI4NjRl9gpe9Q==: --dhchap-ctrl-secret DHHC-1:01:NGNjMWI4NmEzOWFkYzEyYjQ3ODUyZTFiYjdmZjBkMzFwE2Ux: 00:16:54.918 20:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.918 20:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:54.918 20:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.918 20:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.918 20:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.918 20:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:54.918 20:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:54.918 20:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:54.918 20:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:16:54.918 20:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:54.918 20:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:54.918 20:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:54.918 20:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:54.918 20:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.918 20:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:54.918 20:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.918 20:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.918 20:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.918 20:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:54.918 20:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.177 00:16:55.177 20:42:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:55.177 20:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:55.177 20:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.436 20:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.436 20:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.436 20:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.436 20:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.436 20:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.436 20:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:55.436 { 00:16:55.436 "cntlid": 63, 00:16:55.436 "qid": 0, 00:16:55.436 "state": "enabled", 00:16:55.436 "thread": "nvmf_tgt_poll_group_000", 00:16:55.436 "listen_address": { 00:16:55.436 "trtype": "TCP", 00:16:55.436 "adrfam": "IPv4", 00:16:55.436 "traddr": "10.0.0.2", 00:16:55.436 "trsvcid": "4420" 00:16:55.436 }, 00:16:55.436 "peer_address": { 00:16:55.436 "trtype": "TCP", 00:16:55.436 "adrfam": "IPv4", 00:16:55.436 "traddr": "10.0.0.1", 00:16:55.436 "trsvcid": "39062" 00:16:55.436 }, 00:16:55.436 "auth": { 00:16:55.436 "state": "completed", 00:16:55.436 "digest": "sha384", 00:16:55.436 "dhgroup": "ffdhe2048" 00:16:55.436 } 00:16:55.436 } 00:16:55.436 ]' 00:16:55.436 20:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:55.436 20:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:55.436 20:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:55.436 20:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:55.436 20:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:55.436 20:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.436 20:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.436 20:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.696 20:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:Y2Y2M2NhMTIzMGM4YTY3NGZhMzRhMmE2YzdmZGU4Y2RkYjZkOTY2YjFkYmIzYjZmMDVlNWNiNWRmZTM0MWUxNqx2SvY=: 00:16:56.263 20:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.263 20:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:56.263 20:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.264 20:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:56.264 20:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.264 20:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:56.264 20:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:56.264 20:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:56.264 20:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:56.522 20:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:16:56.522 20:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:56.522 20:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:56.522 20:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:56.522 20:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:56.522 20:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.522 20:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.522 20:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.522 20:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.522 20:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.522 20:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.522 20:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.780 00:16:56.780 20:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:56.780 20:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:56.780 20:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.038 20:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.038 20:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.038 20:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.038 20:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.038 20:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.038 20:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:57.038 { 
00:16:57.038 "cntlid": 65, 00:16:57.038 "qid": 0, 00:16:57.038 "state": "enabled", 00:16:57.038 "thread": "nvmf_tgt_poll_group_000", 00:16:57.039 "listen_address": { 00:16:57.039 "trtype": "TCP", 00:16:57.039 "adrfam": "IPv4", 00:16:57.039 "traddr": "10.0.0.2", 00:16:57.039 "trsvcid": "4420" 00:16:57.039 }, 00:16:57.039 "peer_address": { 00:16:57.039 "trtype": "TCP", 00:16:57.039 "adrfam": "IPv4", 00:16:57.039 "traddr": "10.0.0.1", 00:16:57.039 "trsvcid": "39080" 00:16:57.039 }, 00:16:57.039 "auth": { 00:16:57.039 "state": "completed", 00:16:57.039 "digest": "sha384", 00:16:57.039 "dhgroup": "ffdhe3072" 00:16:57.039 } 00:16:57.039 } 00:16:57.039 ]' 00:16:57.039 20:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:57.039 20:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:57.039 20:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:57.039 20:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:57.039 20:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:57.039 20:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.039 20:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.039 20:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.297 20:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzRkYzRlNTdhZTEyMDA0NGJiNmFiNjU1YjVkMTIyNTdlZmYwNmIxYjljNmNkZDMynomgHQ==: --dhchap-ctrl-secret DHHC-1:03:YTM2Nzg0YzRkMzJhOThmZmNjYTY5OTMzYzRlY2FjZWUzMWJkNmFhMWNlZGQ1YWExMmQ2N2VmODM5NzM4MzNkYz1U18s=: 00:16:57.862 20:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.862 20:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:57.862 20:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.862 20:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.862 20:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.862 20:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:57.862 20:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:57.862 20:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:57.862 20:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:16:57.862 20:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.862 20:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:16:57.862 20:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:57.862 20:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:57.862 20:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.862 20:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.862 20:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.862 20:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.862 20:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.862 20:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.862 20:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.120 00:16:58.120 20:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:58.120 20:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:58.120 20:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.377 20:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.377 20:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.377 20:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.377 20:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.377 20:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.377 20:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:58.377 { 00:16:58.377 "cntlid": 67, 00:16:58.377 "qid": 0, 00:16:58.377 "state": "enabled", 00:16:58.377 "thread": "nvmf_tgt_poll_group_000", 00:16:58.377 "listen_address": { 00:16:58.377 "trtype": "TCP", 00:16:58.377 "adrfam": "IPv4", 00:16:58.377 "traddr": "10.0.0.2", 00:16:58.377 "trsvcid": "4420" 00:16:58.377 }, 00:16:58.377 "peer_address": { 00:16:58.377 "trtype": "TCP", 00:16:58.377 "adrfam": "IPv4", 00:16:58.377 "traddr": "10.0.0.1", 00:16:58.377 "trsvcid": "39122" 00:16:58.377 }, 00:16:58.377 "auth": { 00:16:58.377 "state": "completed", 00:16:58.377 "digest": "sha384", 00:16:58.377 "dhgroup": "ffdhe3072" 00:16:58.377 } 00:16:58.377 } 00:16:58.377 ]' 00:16:58.377 20:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:58.377 20:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:58.377 20:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:58.377 20:42:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:58.377 20:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:58.635 20:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.635 20:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.635 20:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.635 20:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZmMxNDEwYWQxN2UxZGI2NDBmMDZlYTY1YTZiMjJmYTPw6OVr: --dhchap-ctrl-secret DHHC-1:02:YjQxYmQzYWVmYzBhNTQ2MTAyYmY1MjkwMDczNzgzN2IyZjUzYTlhOGIxZmM2OWQ0pSq8zA==: 00:16:59.201 20:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.201 20:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:59.201 20:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.201 20:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.201 20:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.201 20:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:59.201 20:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:59.201 20:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:59.460 20:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:16:59.460 20:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:59.460 20:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:59.460 20:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:59.460 20:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:59.460 20:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.460 20:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.460 20:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.460 20:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.460 20:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.460 20:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.460 20:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.718 00:16:59.718 20:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:59.718 20:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:59.718 20:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.977 20:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.977 20:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.977 20:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.977 20:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.977 20:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.977 20:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:59.977 { 00:16:59.977 "cntlid": 69, 00:16:59.977 "qid": 0, 00:16:59.977 "state": "enabled", 00:16:59.977 "thread": "nvmf_tgt_poll_group_000", 00:16:59.977 "listen_address": { 00:16:59.977 "trtype": "TCP", 00:16:59.977 "adrfam": "IPv4", 00:16:59.977 "traddr": "10.0.0.2", 00:16:59.977 "trsvcid": "4420" 00:16:59.977 }, 00:16:59.977 "peer_address": { 00:16:59.977 "trtype": "TCP", 00:16:59.977 "adrfam": "IPv4", 00:16:59.977 "traddr": "10.0.0.1", 00:16:59.977 "trsvcid": "39134" 00:16:59.977 }, 00:16:59.977 "auth": { 00:16:59.977 "state": "completed", 00:16:59.977 "digest": "sha384", 00:16:59.977 "dhgroup": "ffdhe3072" 00:16:59.977 } 00:16:59.977 } 00:16:59.977 ]' 00:16:59.977 20:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:59.977 20:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:59.977 20:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:59.977 20:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:59.977 20:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:59.977 20:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.977 20:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.977 20:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.235 20:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YWRhMWQ3ZGYyMzVmMGRhNmQwOWIxOWQxOGY3YTI5ZGU2M2VkNjI3MDQ2ZGI4NjRl9gpe9Q==: --dhchap-ctrl-secret 
DHHC-1:01:NGNjMWI4NmEzOWFkYzEyYjQ3ODUyZTFiYjdmZjBkMzFwE2Ux: 00:17:00.802 20:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.802 20:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:00.802 20:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.802 20:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.802 20:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.802 20:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:00.802 20:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:00.802 20:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:01.062 20:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:17:01.062 20:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:01.062 20:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:01.062 20:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:01.062 20:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:01.062 20:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.062 20:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:01.062 20:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.062 20:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.062 20:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.062 20:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:01.062 20:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:01.062 00:17:01.062 20:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:01.062 20:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:01.062 20:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.321 20:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.321 20:42:35 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.321 20:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.321 20:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.321 20:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.321 20:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:01.321 { 00:17:01.321 "cntlid": 71, 00:17:01.321 "qid": 0, 00:17:01.321 "state": "enabled", 00:17:01.321 "thread": "nvmf_tgt_poll_group_000", 00:17:01.321 "listen_address": { 00:17:01.321 "trtype": "TCP", 00:17:01.321 "adrfam": "IPv4", 00:17:01.321 "traddr": "10.0.0.2", 00:17:01.321 "trsvcid": "4420" 00:17:01.321 }, 00:17:01.321 "peer_address": { 00:17:01.321 "trtype": "TCP", 00:17:01.321 "adrfam": "IPv4", 00:17:01.321 "traddr": "10.0.0.1", 00:17:01.321 "trsvcid": "39160" 00:17:01.321 }, 00:17:01.321 "auth": { 00:17:01.321 "state": "completed", 00:17:01.321 "digest": "sha384", 00:17:01.321 "dhgroup": "ffdhe3072" 00:17:01.321 } 00:17:01.321 } 00:17:01.321 ]' 00:17:01.321 20:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:01.321 20:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:01.321 20:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:01.580 20:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:01.580 20:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:01.580 20:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.580 20:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.580 20:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.580 20:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:Y2Y2M2NhMTIzMGM4YTY3NGZhMzRhMmE2YzdmZGU4Y2RkYjZkOTY2YjFkYmIzYjZmMDVlNWNiNWRmZTM0MWUxNqx2SvY=: 00:17:02.147 20:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.147 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.147 20:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:02.147 20:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.147 20:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.147 20:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.147 20:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:02.147 20:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:02.147 20:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:02.147 20:42:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:02.406 20:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:17:02.406 20:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:02.406 20:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:02.406 20:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:02.406 20:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:02.406 20:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.406 20:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.406 20:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.406 20:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.406 20:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.406 20:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.406 20:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.665 00:17:02.665 20:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:02.665 20:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:02.665 20:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.924 20:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.924 20:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.924 20:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.924 20:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.924 20:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.924 20:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:02.924 { 00:17:02.924 "cntlid": 73, 00:17:02.924 "qid": 0, 00:17:02.924 "state": "enabled", 00:17:02.924 "thread": "nvmf_tgt_poll_group_000", 00:17:02.924 "listen_address": { 00:17:02.924 "trtype": "TCP", 00:17:02.924 "adrfam": "IPv4", 00:17:02.924 "traddr": "10.0.0.2", 00:17:02.924 "trsvcid": "4420" 00:17:02.924 }, 00:17:02.924 "peer_address": { 00:17:02.924 "trtype": "TCP", 00:17:02.924 "adrfam": "IPv4", 00:17:02.924 "traddr": "10.0.0.1", 00:17:02.924 "trsvcid": "60166" 00:17:02.924 }, 00:17:02.924 "auth": { 00:17:02.924 
"state": "completed", 00:17:02.924 "digest": "sha384", 00:17:02.924 "dhgroup": "ffdhe4096" 00:17:02.924 } 00:17:02.924 } 00:17:02.924 ]' 00:17:02.924 20:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:02.924 20:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:02.924 20:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:02.924 20:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:02.924 20:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:02.924 20:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.924 20:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.924 20:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.183 20:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzRkYzRlNTdhZTEyMDA0NGJiNmFiNjU1YjVkMTIyNTdlZmYwNmIxYjljNmNkZDMynomgHQ==: --dhchap-ctrl-secret DHHC-1:03:YTM2Nzg0YzRkMzJhOThmZmNjYTY5OTMzYzRlY2FjZWUzMWJkNmFhMWNlZGQ1YWExMmQ2N2VmODM5NzM4MzNkYz1U18s=: 00:17:03.750 20:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.750 20:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:03.750 20:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.750 20:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.750 20:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.750 20:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:03.750 20:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:03.750 20:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:04.008 20:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:17:04.008 20:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:04.008 20:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:04.008 20:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:04.008 20:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:04.008 20:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.008 20:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
00:17:04.008 20:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:04.008 20:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:04.008 20:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:04.008 20:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:04.008 20:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:04.008 20:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:04.267
00:17:04.267 20:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:04.267 20:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:04.267 20:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:04.267 20:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:04.267 20:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:04.267 20:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:04.267 20:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:04.525 20:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:04.525 20:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:04.525 {
00:17:04.525 "cntlid": 75,
00:17:04.525 "qid": 0,
00:17:04.525 "state": "enabled",
00:17:04.525 "thread": "nvmf_tgt_poll_group_000",
00:17:04.525 "listen_address": {
00:17:04.525 "trtype": "TCP",
00:17:04.525 "adrfam": "IPv4",
00:17:04.525 "traddr": "10.0.0.2",
00:17:04.525 "trsvcid": "4420"
00:17:04.525 },
00:17:04.525 "peer_address": {
00:17:04.525 "trtype": "TCP",
00:17:04.525 "adrfam": "IPv4",
00:17:04.525 "traddr": "10.0.0.1",
00:17:04.525 "trsvcid": "60204"
00:17:04.525 },
00:17:04.525 "auth": {
00:17:04.525 "state": "completed",
00:17:04.525 "digest": "sha384",
00:17:04.525 "dhgroup": "ffdhe4096"
00:17:04.525 }
00:17:04.525 }
00:17:04.525 ]'
00:17:04.525 20:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:04.525 20:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:04.525 20:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:04.525 20:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:04.525 20:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:04.525 20:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:04.525 20:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:04.525 20:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:04.785 20:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZmMxNDEwYWQxN2UxZGI2NDBmMDZlYTY1YTZiMjJmYTPw6OVr: --dhchap-ctrl-secret DHHC-1:02:YjQxYmQzYWVmYzBhNTQ2MTAyYmY1MjkwMDczNzgzN2IyZjUzYTlhOGIxZmM2OWQ0pSq8zA==:
00:17:05.353 20:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:05.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:05.353 20:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:05.353 20:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:05.353 20:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:05.353 20:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:05.353 20:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:05.353 20:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:17:05.353 20:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:17:05.353 20:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2
00:17:05.353 20:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:05.353 20:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:05.353 20:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:17:05.353 20:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:17:05.354 20:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:05.354 20:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:05.354 20:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:05.354 20:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:05.354 20:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:05.354 20:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
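The --dhchap-secret/--dhchap-ctrl-secret strings handed to nvme connect above are DH-HMAC-CHAP secrets in their standard textual form: "DHHC-1:", a two-digit hash indicator (00 = untransformed key, 01/02/03 = key transformed with SHA-256/384/512), a base64 payload, and a trailing ":". Supplying both flags, as this test does, makes authentication bidirectional. A trimmed sketch with the key material elided (these are placeholders, not real secrets):

  # Host-side connect with explicit DH-HMAC-CHAP secrets (addresses and NQNs as in this log).
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret 'DHHC-1:01:<base64 host key, elided>:' \
    --dhchap-ctrl-secret 'DHHC-1:02:<base64 controller key, elided>:'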
00:17:05.354 20:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:05.612
00:17:05.612 20:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:05.612 20:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:05.612 20:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:05.871 20:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:05.871 20:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:05.871 20:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:05.871 20:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:05.871 20:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:05.871 20:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:05.871 {
00:17:05.871 "cntlid": 77,
00:17:05.871 "qid": 0,
00:17:05.871 "state": "enabled",
00:17:05.871 "thread": "nvmf_tgt_poll_group_000",
00:17:05.871 "listen_address": {
00:17:05.871 "trtype": "TCP",
00:17:05.871 "adrfam": "IPv4",
00:17:05.871 "traddr": "10.0.0.2",
00:17:05.871 "trsvcid": "4420"
00:17:05.871 },
00:17:05.871 "peer_address": {
00:17:05.871 "trtype": "TCP",
00:17:05.871 "adrfam": "IPv4",
00:17:05.871 "traddr": "10.0.0.1",
00:17:05.871 "trsvcid": "60236"
00:17:05.871 },
00:17:05.871 "auth": {
00:17:05.871 "state": "completed",
00:17:05.871 "digest": "sha384",
00:17:05.871 "dhgroup": "ffdhe4096"
00:17:05.871 }
00:17:05.871 }
00:17:05.871 ]'
00:17:05.871 20:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:05.871 20:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:05.871 20:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:06.130 20:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:06.130 20:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:06.130 20:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:06.130 20:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:06.130 20:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:06.130 20:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YWRhMWQ3ZGYyMzVmMGRhNmQwOWIxOWQxOGY3YTI5ZGU2M2VkNjI3MDQ2ZGI4NjRl9gpe9Q==: --dhchap-ctrl-secret DHHC-1:01:NGNjMWI4NmEzOWFkYzEyYjQ3ODUyZTFiYjdmZjBkMzFwE2Ux:
00:17:06.697 20:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:06.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:06.697 20:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
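The key3 pass that follows differs from the others in one detail: the test defines no controller key for index 3, so the ${ckeys[$3]:+...} expansion traced at auth.sh@37 produces nothing, and the add_host/attach calls below carry only --dhchap-key key3, i.e. unidirectional authentication. A small demonstration of that bash idiom (array contents here are illustrative):

  ckeys=("c0" "c1" "c2" "")                          # key3's controller key left empty
  keyid=3
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo "${#ckey[@]}"                                 # prints 0: no flag is emitted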
00:17:06.697 20:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:06.697 20:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:06.697 20:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:06.697 20:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:06.697 20:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:17:06.697 20:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:17:06.957 20:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3
00:17:06.957 20:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:06.957 20:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:06.957 20:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:17:06.957 20:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:17:06.957 20:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:06.957 20:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:17:06.957 20:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:06.957 20:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:06.957 20:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:06.957 20:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:06.957 20:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:07.251
00:17:07.251 20:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:07.251 20:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:07.251 20:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:07.535 20:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:07.535 20:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:07.535 20:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:07.535 20:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:07.535 20:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:07.535 20:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:07.535 {
00:17:07.535 "cntlid": 79,
00:17:07.535 "qid": 0,
00:17:07.535 "state": "enabled",
00:17:07.535 "thread": "nvmf_tgt_poll_group_000",
00:17:07.535 "listen_address": {
00:17:07.535 "trtype": "TCP",
00:17:07.535 "adrfam": "IPv4",
00:17:07.535 "traddr": "10.0.0.2",
00:17:07.535 "trsvcid": "4420"
00:17:07.535 },
00:17:07.535 "peer_address": {
00:17:07.535 "trtype": "TCP",
00:17:07.535 "adrfam": "IPv4",
00:17:07.535 "traddr": "10.0.0.1",
00:17:07.535 "trsvcid": "60258"
00:17:07.535 },
00:17:07.535 "auth": {
00:17:07.535 "state": "completed",
00:17:07.535 "digest": "sha384",
00:17:07.535 "dhgroup": "ffdhe4096"
00:17:07.535 }
00:17:07.535 }
00:17:07.535 ]'
00:17:07.535 20:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:07.535 20:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:07.535 20:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:07.535 20:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:07.535 20:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:07.535 20:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:07.535 20:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:07.535 20:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:07.793 20:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:Y2Y2M2NhMTIzMGM4YTY3NGZhMzRhMmE2YzdmZGU4Y2RkYjZkOTY2YjFkYmIzYjZmMDVlNWNiNWRmZTM0MWUxNqx2SvY=:
00:17:08.360 20:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:08.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:08.360 20:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:08.360 20:42:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:08.360 20:42:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:08.360 20:42:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:08.360 20:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:17:08.360 20:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:08.360 20:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:17:08.360 20:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:17:08.619 20:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0
00:17:08.619 20:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:08.619 20:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:08.619 20:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:17:08.619 20:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:17:08.619 20:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:08.619 20:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:08.619 20:42:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:08.619 20:42:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:08.619 20:42:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:08.619 20:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:08.619 20:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:08.878
00:17:08.878 20:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:08.878 20:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:08.878 20:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:09.137 20:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:09.137 20:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:09.137 20:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:09.137 20:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:09.137 20:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:09.137 20:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:09.137 {
00:17:09.137 "cntlid": 81,
00:17:09.137 "qid": 0,
00:17:09.137 "state": "enabled",
00:17:09.137 "thread": "nvmf_tgt_poll_group_000",
00:17:09.137 "listen_address": {
00:17:09.137 "trtype": "TCP",
00:17:09.137 "adrfam": "IPv4",
00:17:09.137 "traddr": "10.0.0.2",
00:17:09.137 "trsvcid": "4420"
00:17:09.137 },
00:17:09.137 "peer_address": {
00:17:09.137 "trtype": "TCP",
00:17:09.137 "adrfam": "IPv4",
00:17:09.137 "traddr": "10.0.0.1",
00:17:09.137 "trsvcid": "60284"
00:17:09.137 },
00:17:09.137 "auth": {
00:17:09.137 "state": "completed",
00:17:09.137 "digest": "sha384",
00:17:09.137 "dhgroup": "ffdhe6144"
00:17:09.137 }
00:17:09.137 }
00:17:09.137 ]'
00:17:09.137 20:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:09.137 20:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:09.137 20:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
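bdev_nvme_attach_controller is what actually drives the host side of each handshake; the flags repeated throughout this trace map one-to-one onto the fabric connection. The same call, restated as a commented sketch (values taken from the log above):

  # -b: controller/bdev name     -t/-f: transport and address family
  # -a/-s: target address and service id   -q: host NQN   -n: subsystem NQN
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    -n nqn.2024-03.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0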
00:17:09.137 20:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:17:09.137 20:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:09.137 20:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:09.137 20:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:09.137 20:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:09.395 20:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzRkYzRlNTdhZTEyMDA0NGJiNmFiNjU1YjVkMTIyNTdlZmYwNmIxYjljNmNkZDMynomgHQ==: --dhchap-ctrl-secret DHHC-1:03:YTM2Nzg0YzRkMzJhOThmZmNjYTY5OTMzYzRlY2FjZWUzMWJkNmFhMWNlZGQ1YWExMmQ2N2VmODM5NzM4MzNkYz1U18s=:
00:17:09.963 20:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:09.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:09.963 20:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:09.963 20:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:09.963 20:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:09.963 20:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:09.963 20:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:09.963 20:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:17:09.963 20:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:17:10.222 20:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1
00:17:10.222 20:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:10.222 20:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:10.222 20:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:17:10.222 20:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:17:10.222 20:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:10.222 20:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:10.222 20:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:10.222 20:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:10.222 20:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:10.222 20:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:10.222 20:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:10.481
00:17:10.481 20:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:10.481 20:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:10.481 20:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:10.739 20:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:10.739 20:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:10.739 20:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:10.739 20:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:10.739 20:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:10.739 20:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:10.739 {
00:17:10.739 "cntlid": 83,
00:17:10.739 "qid": 0,
00:17:10.739 "state": "enabled",
00:17:10.739 "thread": "nvmf_tgt_poll_group_000",
00:17:10.739 "listen_address": {
00:17:10.739 "trtype": "TCP",
00:17:10.739 "adrfam": "IPv4",
00:17:10.739 "traddr": "10.0.0.2",
00:17:10.739 "trsvcid": "4420"
00:17:10.739 },
00:17:10.739 "peer_address": {
00:17:10.739 "trtype": "TCP",
00:17:10.739 "adrfam": "IPv4",
00:17:10.739 "traddr": "10.0.0.1",
00:17:10.739 "trsvcid": "60314"
00:17:10.739 },
00:17:10.739 "auth": {
00:17:10.740 "state": "completed",
00:17:10.740 "digest": "sha384",
00:17:10.740 "dhgroup": "ffdhe6144"
00:17:10.740 }
00:17:10.740 }
00:17:10.740 ]'
00:17:10.740 20:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:10.740 20:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:10.740 20:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:10.740 20:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:17:10.740 20:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:10.740 20:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:10.740 20:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:10.740 20:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:10.998 20:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZmMxNDEwYWQxN2UxZGI2NDBmMDZlYTY1YTZiMjJmYTPw6OVr: --dhchap-ctrl-secret DHHC-1:02:YjQxYmQzYWVmYzBhNTQ2MTAyYmY1MjkwMDczNzgzN2IyZjUzYTlhOGIxZmM2OWQ0pSq8zA==:
00:17:11.566 20:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:11.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:11.566 20:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:11.566 20:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:11.566 20:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:11.566 20:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:11.566 20:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:11.566 20:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:17:11.566 20:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:17:11.566 20:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2
00:17:11.566 20:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:11.566 20:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:11.566 20:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:17:11.566 20:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:17:11.566 20:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:11.566 20:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:11.566 20:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:11.566 20:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:11.566 20:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:11.566 20:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:11.566 20:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:12.134
00:17:12.134 20:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:12.134 20:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:12.134 20:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
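On the target side, nvmf_subsystem_add_host is the step that authorizes the host NQN and binds its DH-HMAC-CHAP keys before each attach; the key0..key3/ckey0..ckey3 arguments are key names, presumably registered in the target's keyring earlier in the run (that setup is not shown in this excerpt). A sketch of the grant as it appears in each cycle:

  # Allow the host and bind its auth keys (key names, not literal secrets).
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2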
00:17:12.134 20:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:12.134 20:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:12.134 20:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:12.134 20:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:12.134 20:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:12.134 20:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:12.134 {
00:17:12.134 "cntlid": 85,
00:17:12.134 "qid": 0,
00:17:12.134 "state": "enabled",
00:17:12.134 "thread": "nvmf_tgt_poll_group_000",
00:17:12.134 "listen_address": {
00:17:12.134 "trtype": "TCP",
00:17:12.134 "adrfam": "IPv4",
00:17:12.134 "traddr": "10.0.0.2",
00:17:12.134 "trsvcid": "4420"
00:17:12.134 },
00:17:12.134 "peer_address": {
00:17:12.134 "trtype": "TCP",
00:17:12.134 "adrfam": "IPv4",
00:17:12.134 "traddr": "10.0.0.1",
00:17:12.134 "trsvcid": "60334"
00:17:12.134 },
00:17:12.134 "auth": {
00:17:12.134 "state": "completed",
00:17:12.134 "digest": "sha384",
00:17:12.134 "dhgroup": "ffdhe6144"
00:17:12.134 }
00:17:12.134 }
00:17:12.134 ]'
00:17:12.393 20:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:12.393 20:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:12.393 20:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:12.393 20:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:17:12.393 20:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:12.393 20:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:12.393 20:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:12.393 20:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:12.651 20:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YWRhMWQ3ZGYyMzVmMGRhNmQwOWIxOWQxOGY3YTI5ZGU2M2VkNjI3MDQ2ZGI4NjRl9gpe9Q==: --dhchap-ctrl-secret DHHC-1:01:NGNjMWI4NmEzOWFkYzEyYjQ3ODUyZTFiYjdmZjBkMzFwE2Ux:
00:17:13.219 20:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:13.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:13.219 20:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:13.219 20:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:13.219 20:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:13.219 20:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:13.219 20:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:13.219 20:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:17:13.219 20:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:17:13.219 20:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3
00:17:13.219 20:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:13.219 20:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:13.219 20:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:17:13.219 20:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:17:13.219 20:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:13.219 20:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:17:13.219 20:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:13.219 20:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:13.219 20:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:13.219 20:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:13.219 20:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:13.479
00:17:13.737 20:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:13.737 20:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:13.737 20:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:13.737 20:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:13.737 20:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:13.737 20:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:13.737 20:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:13.737 20:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:13.737 20:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:13.737 {
00:17:13.737 "cntlid": 87,
00:17:13.737 "qid": 0,
00:17:13.737 "state": "enabled",
00:17:13.737 "thread": "nvmf_tgt_poll_group_000",
00:17:13.737 "listen_address": {
00:17:13.737 "trtype": "TCP",
00:17:13.737 "adrfam": "IPv4",
00:17:13.737 "traddr": "10.0.0.2",
00:17:13.737 "trsvcid": "4420"
00:17:13.737 },
00:17:13.737 "peer_address": {
00:17:13.737 "trtype": "TCP",
00:17:13.737 "adrfam": "IPv4",
00:17:13.737 "traddr": "10.0.0.1",
00:17:13.737 "trsvcid": "40070"
00:17:13.737 },
00:17:13.737 "auth": {
00:17:13.737 "state": "completed",
00:17:13.737 "digest": "sha384",
00:17:13.737 "dhgroup": "ffdhe6144"
00:17:13.737 }
00:17:13.737 }
00:17:13.737 ]'
00:17:13.737 20:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:13.737 20:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:13.737 20:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:13.995 20:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:17:13.995 20:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:13.995 20:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:13.995 20:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:13.996 20:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:13.996 20:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:Y2Y2M2NhMTIzMGM4YTY3NGZhMzRhMmE2YzdmZGU4Y2RkYjZkOTY2YjFkYmIzYjZmMDVlNWNiNWRmZTM0MWUxNqx2SvY=:
00:17:14.563 20:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:14.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:14.563 20:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:14.563 20:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:14.563 20:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:14.563 20:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:14.563 20:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:17:14.563 20:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:14.563 20:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:14.563 20:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:14.822 20:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0
00:17:14.822 20:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:14.822 20:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:14.822 20:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:17:14.822 20:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:17:14.822 20:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:14.822 20:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:14.822 20:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:14.822 20:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:14.822 20:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:14.822 20:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:14.822 20:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:15.391
00:17:15.391 20:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:15.391 20:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:15.391 20:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:15.391 20:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:15.391 20:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:15.391 20:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:15.391 20:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:15.648 20:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:15.648 20:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:15.648 {
00:17:15.648 "cntlid": 89,
00:17:15.648 "qid": 0,
00:17:15.648 "state": "enabled",
00:17:15.648 "thread": "nvmf_tgt_poll_group_000",
00:17:15.648 "listen_address": {
00:17:15.648 "trtype": "TCP",
00:17:15.648 "adrfam": "IPv4",
00:17:15.648 "traddr": "10.0.0.2",
00:17:15.648 "trsvcid": "4420"
00:17:15.648 },
00:17:15.648 "peer_address": {
00:17:15.648 "trtype": "TCP",
00:17:15.648 "adrfam": "IPv4",
00:17:15.648 "traddr": "10.0.0.1",
00:17:15.648 "trsvcid": "40098"
00:17:15.648 },
00:17:15.648 "auth": {
00:17:15.648 "state": "completed",
00:17:15.648 "digest": "sha384",
00:17:15.648 "dhgroup": "ffdhe8192"
00:17:15.648 }
00:17:15.648 }
00:17:15.648 ]'
00:17:15.648 20:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:15.648 20:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:15.648 20:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:15.648 20:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:15.648 20:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:15.648 20:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:15.648 20:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:15.648 20:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:15.904 20:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzRkYzRlNTdhZTEyMDA0NGJiNmFiNjU1YjVkMTIyNTdlZmYwNmIxYjljNmNkZDMynomgHQ==: --dhchap-ctrl-secret DHHC-1:03:YTM2Nzg0YzRkMzJhOThmZmNjYTY5OTMzYzRlY2FjZWUzMWJkNmFhMWNlZGQ1YWExMmQ2N2VmODM5NzM4MzNkYz1U18s=:
00:17:16.471 20:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:16.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:16.471 20:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:16.471 20:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:16.471 20:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:16.471 20:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:16.472 20:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:16.472 20:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:16.472 20:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:16.472 20:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1
00:17:16.472 20:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:16.472 20:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:16.472 20:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:17:16.472 20:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:17:16.472 20:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:16.472 20:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:16.472 20:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:16.472 20:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:16.472 20:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:16.472 20:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:16.472 20:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:17.039
00:17:17.039 20:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:17.039 20:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:17.039 20:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:17.298 20:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:17.298 20:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:17.298 20:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:17.298 20:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:17.298 20:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:17.298 20:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:17.298 {
00:17:17.298 "cntlid": 91,
00:17:17.298 "qid": 0,
00:17:17.298 "state": "enabled",
00:17:17.298 "thread": "nvmf_tgt_poll_group_000",
00:17:17.298 "listen_address": {
00:17:17.298 "trtype": "TCP",
00:17:17.298 "adrfam": "IPv4",
00:17:17.298 "traddr": "10.0.0.2",
00:17:17.298 "trsvcid": "4420"
00:17:17.298 },
00:17:17.298 "peer_address": {
00:17:17.298 "trtype": "TCP",
00:17:17.298 "adrfam": "IPv4",
00:17:17.298 "traddr": "10.0.0.1",
00:17:17.298 "trsvcid": "40134"
00:17:17.298 },
00:17:17.298 "auth": {
00:17:17.298 "state": "completed",
00:17:17.298 "digest": "sha384",
00:17:17.298 "dhgroup": "ffdhe8192"
00:17:17.298 }
00:17:17.298 }
00:17:17.298 ]'
00:17:17.298 20:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:17.298 20:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:17.298 20:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:17.298 20:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:17.298 20:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:17.298 20:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:17.298 20:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:17.298 20:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:17.556 20:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZmMxNDEwYWQxN2UxZGI2NDBmMDZlYTY1YTZiMjJmYTPw6OVr: --dhchap-ctrl-secret DHHC-1:02:YjQxYmQzYWVmYzBhNTQ2MTAyYmY1MjkwMDczNzgzN2IyZjUzYTlhOGIxZmM2OWQ0pSq8zA==:
00:17:18.123 20:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:18.123 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:18.123 20:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
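After each attach, the test confirms the controller actually materialized by listing host-side controllers and comparing names. The odd-looking "[[ nvme0 == \n\v\m\e\0 ]]" entries are just bash xtrace output: the unquoted right-hand side of == is a pattern, so the trace prints it with every character escaped. The check itself reduces to this sketch:

  # Verify the attached controller is visible on the host-side RPC socket.
  name=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
           bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == "nvme0" ]]   # xtrace renders the pattern as \n\v\m\e\0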
00:17:18.123 20:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:18.123 20:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:18.123 20:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:18.123 20:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:18.123 20:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:18.123 20:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:18.382 20:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2
00:17:18.382 20:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:18.382 20:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:18.382 20:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:17:18.382 20:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:17:18.382 20:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:18.382 20:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:18.382 20:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:18.382 20:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:18.382 20:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:18.382 20:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:18.382 20:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:18.641
00:17:18.901 20:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:18.901 20:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:18.901 20:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:18.901 20:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:18.901 20:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:18.901 20:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:18.901 20:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:18.901 20:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:18.901 20:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:18.901 {
00:17:18.901 "cntlid": 93,
00:17:18.901 "qid": 0,
00:17:18.901 "state": "enabled",
00:17:18.901 "thread": "nvmf_tgt_poll_group_000",
00:17:18.901 "listen_address": {
00:17:18.901 "trtype": "TCP",
00:17:18.901 "adrfam": "IPv4",
00:17:18.901 "traddr": "10.0.0.2",
00:17:18.901 "trsvcid": "4420"
00:17:18.901 },
00:17:18.901 "peer_address": {
00:17:18.901 "trtype": "TCP",
00:17:18.901 "adrfam": "IPv4",
00:17:18.901 "traddr": "10.0.0.1",
00:17:18.901 "trsvcid": "40148"
00:17:18.901 },
00:17:18.901 "auth": {
00:17:18.901 "state": "completed",
00:17:18.901 "digest": "sha384",
00:17:18.901 "dhgroup": "ffdhe8192"
00:17:18.901 }
00:17:18.901 }
00:17:18.901 ]'
00:17:18.901 20:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:18.901 20:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:18.901 20:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:19.161 20:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:19.161 20:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:19.161 20:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:19.161 20:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:19.161 20:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:19.161 20:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YWRhMWQ3ZGYyMzVmMGRhNmQwOWIxOWQxOGY3YTI5ZGU2M2VkNjI3MDQ2ZGI4NjRl9gpe9Q==: --dhchap-ctrl-secret DHHC-1:01:NGNjMWI4NmEzOWFkYzEyYjQ3ODUyZTFiYjdmZjBkMzFwE2Ux:
00:17:19.727 20:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:19.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:19.727 20:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:19.727 20:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:19.986 20:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:19.986 20:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:19.986 20:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:19.986 20:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:19.986 20:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:19.986 20:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3
00:17:19.986 20:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:19.986 20:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:19.986 20:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:17:19.986 20:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:17:19.986 20:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:19.986 20:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:17:19.986 20:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:19.986 20:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:19.986 20:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:19.986 20:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:19.986 20:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:20.553
00:17:20.553 20:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:20.553 20:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:20.553 20:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:20.825 20:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:20.825 20:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:20.825 20:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:20.825 20:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:20.825 20:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:20.825 20:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:20.825 {
00:17:20.825 "cntlid": 95,
00:17:20.825 "qid": 0,
00:17:20.825 "state": "enabled",
00:17:20.825 "thread": "nvmf_tgt_poll_group_000",
00:17:20.825 "listen_address": {
00:17:20.825 "trtype": "TCP",
00:17:20.825 "adrfam": "IPv4",
00:17:20.825 "traddr": "10.0.0.2",
00:17:20.825 "trsvcid": "4420"
00:17:20.825 },
00:17:20.825 "peer_address": {
00:17:20.825 "trtype": "TCP",
00:17:20.825 "adrfam": "IPv4",
00:17:20.825 "traddr": "10.0.0.1",
00:17:20.825 "trsvcid": "40180"
00:17:20.825 },
00:17:20.825 "auth": {
00:17:20.825 "state": "completed",
00:17:20.825 "digest": "sha384",
00:17:20.825 "dhgroup": "ffdhe8192"
00:17:20.825 }
00:17:20.825 }
00:17:20.825 ]'
00:17:20.825 20:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:20.825 20:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:20.825 20:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:20.825 20:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:20.825 20:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.825 20:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.825 20:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.084 20:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:Y2Y2M2NhMTIzMGM4YTY3NGZhMzRhMmE2YzdmZGU4Y2RkYjZkOTY2YjFkYmIzYjZmMDVlNWNiNWRmZTM0MWUxNqx2SvY=: 00:17:21.652 20:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.652 20:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:21.652 20:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.652 20:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.653 20:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.653 20:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:21.653 20:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:21.653 20:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.653 20:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:21.653 20:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:21.653 20:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:17:21.653 20:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:21.653 20:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:21.653 20:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:21.653 20:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:21.653 20:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.653 20:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.653 20:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.653 20:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.912 20:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.912 20:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.912 20:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.912 00:17:21.912 20:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:21.912 20:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:21.912 20:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.171 20:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.171 20:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.171 20:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.171 20:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.171 20:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.171 20:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:22.171 { 00:17:22.171 "cntlid": 97, 00:17:22.171 "qid": 0, 00:17:22.171 "state": "enabled", 00:17:22.171 "thread": "nvmf_tgt_poll_group_000", 00:17:22.171 "listen_address": { 00:17:22.171 "trtype": "TCP", 00:17:22.171 "adrfam": "IPv4", 00:17:22.171 "traddr": "10.0.0.2", 00:17:22.171 "trsvcid": "4420" 00:17:22.171 }, 00:17:22.171 "peer_address": { 00:17:22.171 "trtype": "TCP", 00:17:22.171 "adrfam": "IPv4", 00:17:22.171 "traddr": "10.0.0.1", 00:17:22.171 "trsvcid": "40208" 00:17:22.171 }, 00:17:22.171 "auth": { 00:17:22.171 "state": "completed", 00:17:22.171 "digest": "sha512", 00:17:22.171 "dhgroup": "null" 00:17:22.171 } 00:17:22.171 } 00:17:22.171 ]' 00:17:22.171 20:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:22.171 20:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:22.171 20:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:22.171 20:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:22.171 20:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:22.430 20:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.430 20:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.430 20:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.430 20:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzRkYzRlNTdhZTEyMDA0NGJiNmFiNjU1YjVkMTIyNTdlZmYwNmIxYjljNmNkZDMynomgHQ==: --dhchap-ctrl-secret 
DHHC-1:03:YTM2Nzg0YzRkMzJhOThmZmNjYTY5OTMzYzRlY2FjZWUzMWJkNmFhMWNlZGQ1YWExMmQ2N2VmODM5NzM4MzNkYz1U18s=: 00:17:22.995 20:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.995 20:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:22.995 20:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.995 20:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.996 20:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.996 20:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:22.996 20:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:22.996 20:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:23.254 20:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:17:23.254 20:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:23.254 20:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:23.254 20:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:23.254 20:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:23.254 20:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.254 20:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.254 20:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.254 20:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.254 20:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.254 20:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.254 20:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.514 00:17:23.514 20:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:23.514 20:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:23.514 20:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.772 20:42:58 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.772 20:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.772 20:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.772 20:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.772 20:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.772 20:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:23.772 { 00:17:23.772 "cntlid": 99, 00:17:23.772 "qid": 0, 00:17:23.772 "state": "enabled", 00:17:23.772 "thread": "nvmf_tgt_poll_group_000", 00:17:23.772 "listen_address": { 00:17:23.772 "trtype": "TCP", 00:17:23.772 "adrfam": "IPv4", 00:17:23.772 "traddr": "10.0.0.2", 00:17:23.772 "trsvcid": "4420" 00:17:23.772 }, 00:17:23.772 "peer_address": { 00:17:23.772 "trtype": "TCP", 00:17:23.772 "adrfam": "IPv4", 00:17:23.772 "traddr": "10.0.0.1", 00:17:23.772 "trsvcid": "41892" 00:17:23.772 }, 00:17:23.772 "auth": { 00:17:23.772 "state": "completed", 00:17:23.772 "digest": "sha512", 00:17:23.772 "dhgroup": "null" 00:17:23.772 } 00:17:23.772 } 00:17:23.772 ]' 00:17:23.772 20:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:23.772 20:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:23.772 20:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:23.772 20:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:23.772 20:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:23.772 20:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.772 20:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.772 20:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.031 20:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZmMxNDEwYWQxN2UxZGI2NDBmMDZlYTY1YTZiMjJmYTPw6OVr: --dhchap-ctrl-secret DHHC-1:02:YjQxYmQzYWVmYzBhNTQ2MTAyYmY1MjkwMDczNzgzN2IyZjUzYTlhOGIxZmM2OWQ0pSq8zA==: 00:17:24.599 20:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.599 20:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:24.599 20:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.599 20:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.599 20:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.599 20:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:24.599 20:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:24.599 20:42:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:24.858 20:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:17:24.858 20:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:24.858 20:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:24.858 20:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:24.858 20:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:24.858 20:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.858 20:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.858 20:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.858 20:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.858 20:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.858 20:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.858 20:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.858 00:17:24.858 20:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:24.858 20:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:24.858 20:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.116 20:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.116 20:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.116 20:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.116 20:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.116 20:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.116 20:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:25.116 { 00:17:25.116 "cntlid": 101, 00:17:25.116 "qid": 0, 00:17:25.116 "state": "enabled", 00:17:25.116 "thread": "nvmf_tgt_poll_group_000", 00:17:25.116 "listen_address": { 00:17:25.116 "trtype": "TCP", 00:17:25.116 "adrfam": "IPv4", 00:17:25.116 "traddr": "10.0.0.2", 00:17:25.116 "trsvcid": "4420" 00:17:25.116 }, 00:17:25.116 "peer_address": { 00:17:25.116 "trtype": "TCP", 00:17:25.116 "adrfam": "IPv4", 00:17:25.116 "traddr": "10.0.0.1", 00:17:25.116 "trsvcid": "41922" 00:17:25.116 }, 00:17:25.116 "auth": 
{ 00:17:25.116 "state": "completed", 00:17:25.116 "digest": "sha512", 00:17:25.116 "dhgroup": "null" 00:17:25.116 } 00:17:25.116 } 00:17:25.117 ]' 00:17:25.117 20:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:25.117 20:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:25.117 20:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:25.117 20:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:25.375 20:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:25.375 20:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.375 20:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.375 20:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.375 20:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YWRhMWQ3ZGYyMzVmMGRhNmQwOWIxOWQxOGY3YTI5ZGU2M2VkNjI3MDQ2ZGI4NjRl9gpe9Q==: --dhchap-ctrl-secret DHHC-1:01:NGNjMWI4NmEzOWFkYzEyYjQ3ODUyZTFiYjdmZjBkMzFwE2Ux: 00:17:25.997 20:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.997 20:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:25.997 20:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.997 20:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.997 20:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.997 20:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:25.997 20:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:25.997 20:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:26.256 20:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:17:26.256 20:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:26.256 20:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:26.256 20:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:26.256 20:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:26.256 20:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.256 20:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:26.256 20:43:00 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.256 20:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.256 20:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.256 20:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:26.256 20:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:26.514 00:17:26.514 20:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:26.514 20:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:26.514 20:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.514 20:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.514 20:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.514 20:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.514 20:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.514 20:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.514 20:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:26.514 { 00:17:26.514 "cntlid": 103, 00:17:26.514 "qid": 0, 00:17:26.514 "state": "enabled", 00:17:26.514 "thread": "nvmf_tgt_poll_group_000", 00:17:26.514 "listen_address": { 00:17:26.514 "trtype": "TCP", 00:17:26.514 "adrfam": "IPv4", 00:17:26.514 "traddr": "10.0.0.2", 00:17:26.514 "trsvcid": "4420" 00:17:26.514 }, 00:17:26.514 "peer_address": { 00:17:26.514 "trtype": "TCP", 00:17:26.514 "adrfam": "IPv4", 00:17:26.514 "traddr": "10.0.0.1", 00:17:26.514 "trsvcid": "41954" 00:17:26.514 }, 00:17:26.514 "auth": { 00:17:26.514 "state": "completed", 00:17:26.514 "digest": "sha512", 00:17:26.514 "dhgroup": "null" 00:17:26.514 } 00:17:26.514 } 00:17:26.514 ]' 00:17:26.514 20:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:26.773 20:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:26.773 20:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:26.773 20:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:26.773 20:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:26.773 20:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.773 20:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.773 20:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.031 20:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:Y2Y2M2NhMTIzMGM4YTY3NGZhMzRhMmE2YzdmZGU4Y2RkYjZkOTY2YjFkYmIzYjZmMDVlNWNiNWRmZTM0MWUxNqx2SvY=: 00:17:27.596 20:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.597 20:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:27.597 20:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.597 20:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.597 20:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.597 20:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:27.597 20:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:27.597 20:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:27.597 20:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:27.597 20:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:17:27.597 20:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:27.597 20:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:27.597 20:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:27.597 20:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:27.597 20:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.597 20:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.597 20:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.597 20:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.597 20:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.597 20:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.597 20:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.855 00:17:27.855 20:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:27.855 20:43:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:27.855 20:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.113 20:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.113 20:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.113 20:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.113 20:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.113 20:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.113 20:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:28.113 { 00:17:28.113 "cntlid": 105, 00:17:28.113 "qid": 0, 00:17:28.113 "state": "enabled", 00:17:28.113 "thread": "nvmf_tgt_poll_group_000", 00:17:28.113 "listen_address": { 00:17:28.113 "trtype": "TCP", 00:17:28.113 "adrfam": "IPv4", 00:17:28.113 "traddr": "10.0.0.2", 00:17:28.113 "trsvcid": "4420" 00:17:28.113 }, 00:17:28.113 "peer_address": { 00:17:28.113 "trtype": "TCP", 00:17:28.113 "adrfam": "IPv4", 00:17:28.113 "traddr": "10.0.0.1", 00:17:28.113 "trsvcid": "41988" 00:17:28.113 }, 00:17:28.114 "auth": { 00:17:28.114 "state": "completed", 00:17:28.114 "digest": "sha512", 00:17:28.114 "dhgroup": "ffdhe2048" 00:17:28.114 } 00:17:28.114 } 00:17:28.114 ]' 00:17:28.114 20:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:28.114 20:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:28.114 20:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:28.114 20:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:28.114 20:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:28.114 20:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.114 20:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.114 20:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.372 20:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzRkYzRlNTdhZTEyMDA0NGJiNmFiNjU1YjVkMTIyNTdlZmYwNmIxYjljNmNkZDMynomgHQ==: --dhchap-ctrl-secret DHHC-1:03:YTM2Nzg0YzRkMzJhOThmZmNjYTY5OTMzYzRlY2FjZWUzMWJkNmFhMWNlZGQ1YWExMmQ2N2VmODM5NzM4MzNkYz1U18s=: 00:17:28.939 20:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.940 20:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:28.940 20:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.940 20:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:28.940 20:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.940 20:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:28.940 20:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:28.940 20:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:29.198 20:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:17:29.198 20:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.198 20:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:29.198 20:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:29.198 20:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:29.198 20:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.198 20:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.198 20:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.198 20:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.198 20:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.198 20:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.198 20:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.457 00:17:29.457 20:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:29.457 20:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:29.457 20:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.457 20:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.457 20:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.457 20:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.457 20:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.715 20:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.715 20:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:29.715 { 00:17:29.715 "cntlid": 107, 00:17:29.715 "qid": 0, 00:17:29.715 "state": "enabled", 00:17:29.715 "thread": 
"nvmf_tgt_poll_group_000", 00:17:29.715 "listen_address": { 00:17:29.715 "trtype": "TCP", 00:17:29.715 "adrfam": "IPv4", 00:17:29.715 "traddr": "10.0.0.2", 00:17:29.715 "trsvcid": "4420" 00:17:29.715 }, 00:17:29.715 "peer_address": { 00:17:29.715 "trtype": "TCP", 00:17:29.715 "adrfam": "IPv4", 00:17:29.715 "traddr": "10.0.0.1", 00:17:29.715 "trsvcid": "42008" 00:17:29.715 }, 00:17:29.715 "auth": { 00:17:29.715 "state": "completed", 00:17:29.715 "digest": "sha512", 00:17:29.715 "dhgroup": "ffdhe2048" 00:17:29.715 } 00:17:29.715 } 00:17:29.715 ]' 00:17:29.715 20:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:29.715 20:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:29.715 20:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:29.715 20:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:29.715 20:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:29.715 20:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.715 20:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.715 20:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.973 20:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZmMxNDEwYWQxN2UxZGI2NDBmMDZlYTY1YTZiMjJmYTPw6OVr: --dhchap-ctrl-secret DHHC-1:02:YjQxYmQzYWVmYzBhNTQ2MTAyYmY1MjkwMDczNzgzN2IyZjUzYTlhOGIxZmM2OWQ0pSq8zA==: 00:17:30.540 20:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.540 20:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:30.540 20:43:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.540 20:43:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.540 20:43:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.540 20:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:30.540 20:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:30.540 20:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:30.540 20:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:17:30.540 20:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:30.540 20:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:30.540 20:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:30.540 20:43:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:30.540 20:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.540 20:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.540 20:43:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.540 20:43:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.540 20:43:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.540 20:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.540 20:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.799 00:17:30.799 20:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:30.799 20:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.799 20:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:31.057 20:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.057 20:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.057 20:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.058 20:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.058 20:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.058 20:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:31.058 { 00:17:31.058 "cntlid": 109, 00:17:31.058 "qid": 0, 00:17:31.058 "state": "enabled", 00:17:31.058 "thread": "nvmf_tgt_poll_group_000", 00:17:31.058 "listen_address": { 00:17:31.058 "trtype": "TCP", 00:17:31.058 "adrfam": "IPv4", 00:17:31.058 "traddr": "10.0.0.2", 00:17:31.058 "trsvcid": "4420" 00:17:31.058 }, 00:17:31.058 "peer_address": { 00:17:31.058 "trtype": "TCP", 00:17:31.058 "adrfam": "IPv4", 00:17:31.058 "traddr": "10.0.0.1", 00:17:31.058 "trsvcid": "42048" 00:17:31.058 }, 00:17:31.058 "auth": { 00:17:31.058 "state": "completed", 00:17:31.058 "digest": "sha512", 00:17:31.058 "dhgroup": "ffdhe2048" 00:17:31.058 } 00:17:31.058 } 00:17:31.058 ]' 00:17:31.058 20:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:31.058 20:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:31.058 20:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:31.058 20:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:31.058 20:43:05 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:31.058 20:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.058 20:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.058 20:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.317 20:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YWRhMWQ3ZGYyMzVmMGRhNmQwOWIxOWQxOGY3YTI5ZGU2M2VkNjI3MDQ2ZGI4NjRl9gpe9Q==: --dhchap-ctrl-secret DHHC-1:01:NGNjMWI4NmEzOWFkYzEyYjQ3ODUyZTFiYjdmZjBkMzFwE2Ux: 00:17:31.884 20:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.884 20:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:31.884 20:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.884 20:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.884 20:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.884 20:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:31.884 20:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:31.884 20:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:32.143 20:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:17:32.143 20:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:32.143 20:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:32.143 20:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:32.143 20:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:32.143 20:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.143 20:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:32.143 20:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.143 20:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.143 20:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.143 20:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:32.143 20:43:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:32.401 00:17:32.401 20:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:32.401 20:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:32.401 20:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.401 20:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.401 20:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.401 20:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.402 20:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.660 20:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.660 20:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:32.660 { 00:17:32.660 "cntlid": 111, 00:17:32.660 "qid": 0, 00:17:32.660 "state": "enabled", 00:17:32.660 "thread": "nvmf_tgt_poll_group_000", 00:17:32.660 "listen_address": { 00:17:32.660 "trtype": "TCP", 00:17:32.660 "adrfam": "IPv4", 00:17:32.660 "traddr": "10.0.0.2", 00:17:32.660 "trsvcid": "4420" 00:17:32.660 }, 00:17:32.660 "peer_address": { 00:17:32.660 "trtype": "TCP", 00:17:32.660 "adrfam": "IPv4", 00:17:32.660 "traddr": "10.0.0.1", 00:17:32.660 "trsvcid": "42088" 00:17:32.660 }, 00:17:32.660 "auth": { 00:17:32.660 "state": "completed", 00:17:32.660 "digest": "sha512", 00:17:32.660 "dhgroup": "ffdhe2048" 00:17:32.660 } 00:17:32.660 } 00:17:32.660 ]' 00:17:32.660 20:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:32.660 20:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:32.660 20:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:32.660 20:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:32.660 20:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:32.660 20:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.660 20:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.660 20:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.919 20:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:Y2Y2M2NhMTIzMGM4YTY3NGZhMzRhMmE2YzdmZGU4Y2RkYjZkOTY2YjFkYmIzYjZmMDVlNWNiNWRmZTM0MWUxNqx2SvY=: 00:17:33.485 20:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.485 20:43:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:33.485 20:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.485 20:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.485 20:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.485 20:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:33.485 20:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:33.485 20:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:33.485 20:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:33.485 20:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:17:33.485 20:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:33.485 20:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:33.485 20:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:33.485 20:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:33.485 20:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.485 20:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.485 20:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.485 20:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.485 20:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.485 20:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.485 20:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.743 00:17:33.743 20:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:33.743 20:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:33.743 20:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.001 20:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.001 20:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.001 20:43:08 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.001 20:43:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.001 20:43:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.002 20:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:34.002 { 00:17:34.002 "cntlid": 113, 00:17:34.002 "qid": 0, 00:17:34.002 "state": "enabled", 00:17:34.002 "thread": "nvmf_tgt_poll_group_000", 00:17:34.002 "listen_address": { 00:17:34.002 "trtype": "TCP", 00:17:34.002 "adrfam": "IPv4", 00:17:34.002 "traddr": "10.0.0.2", 00:17:34.002 "trsvcid": "4420" 00:17:34.002 }, 00:17:34.002 "peer_address": { 00:17:34.002 "trtype": "TCP", 00:17:34.002 "adrfam": "IPv4", 00:17:34.002 "traddr": "10.0.0.1", 00:17:34.002 "trsvcid": "48238" 00:17:34.002 }, 00:17:34.002 "auth": { 00:17:34.002 "state": "completed", 00:17:34.002 "digest": "sha512", 00:17:34.002 "dhgroup": "ffdhe3072" 00:17:34.002 } 00:17:34.002 } 00:17:34.002 ]' 00:17:34.002 20:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:34.002 20:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:34.002 20:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:34.002 20:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:34.002 20:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:34.002 20:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.002 20:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.002 20:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.260 20:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzRkYzRlNTdhZTEyMDA0NGJiNmFiNjU1YjVkMTIyNTdlZmYwNmIxYjljNmNkZDMynomgHQ==: --dhchap-ctrl-secret DHHC-1:03:YTM2Nzg0YzRkMzJhOThmZmNjYTY5OTMzYzRlY2FjZWUzMWJkNmFhMWNlZGQ1YWExMmQ2N2VmODM5NzM4MzNkYz1U18s=: 00:17:34.827 20:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.827 20:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:34.827 20:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.827 20:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.827 20:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.827 20:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:34.827 20:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:34.827 20:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:35.086 20:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:17:35.086 20:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:35.086 20:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:35.086 20:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:35.086 20:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:35.086 20:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.086 20:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.086 20:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.086 20:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.086 20:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.086 20:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.086 20:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.345 00:17:35.345 20:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:35.345 20:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:35.345 20:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.604 20:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.604 20:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.604 20:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.604 20:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.604 20:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.604 20:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:35.604 { 00:17:35.604 "cntlid": 115, 00:17:35.604 "qid": 0, 00:17:35.604 "state": "enabled", 00:17:35.604 "thread": "nvmf_tgt_poll_group_000", 00:17:35.604 "listen_address": { 00:17:35.604 "trtype": "TCP", 00:17:35.604 "adrfam": "IPv4", 00:17:35.604 "traddr": "10.0.0.2", 00:17:35.604 "trsvcid": "4420" 00:17:35.604 }, 00:17:35.604 "peer_address": { 00:17:35.604 "trtype": "TCP", 00:17:35.604 "adrfam": "IPv4", 00:17:35.604 "traddr": "10.0.0.1", 00:17:35.604 "trsvcid": "48268" 00:17:35.604 }, 00:17:35.604 "auth": { 00:17:35.604 "state": "completed", 00:17:35.604 "digest": "sha512", 00:17:35.604 "dhgroup": "ffdhe3072" 00:17:35.604 } 00:17:35.604 } 
00:17:35.604 ]' 00:17:35.604 20:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:35.604 20:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:35.604 20:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:35.604 20:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:35.604 20:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:35.604 20:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.604 20:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.604 20:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.863 20:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZmMxNDEwYWQxN2UxZGI2NDBmMDZlYTY1YTZiMjJmYTPw6OVr: --dhchap-ctrl-secret DHHC-1:02:YjQxYmQzYWVmYzBhNTQ2MTAyYmY1MjkwMDczNzgzN2IyZjUzYTlhOGIxZmM2OWQ0pSq8zA==: 00:17:36.431 20:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.431 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.431 20:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:36.431 20:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.431 20:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.431 20:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.431 20:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:36.431 20:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:36.431 20:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:36.690 20:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:17:36.690 20:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:36.690 20:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:36.690 20:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:36.690 20:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:36.690 20:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.690 20:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.690 20:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.690 20:43:10 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.690 20:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.690 20:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.690 20:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.690 00:17:36.949 20:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:36.949 20:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:36.949 20:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.949 20:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.949 20:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.949 20:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.949 20:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.949 20:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.949 20:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:36.949 { 00:17:36.949 "cntlid": 117, 00:17:36.949 "qid": 0, 00:17:36.949 "state": "enabled", 00:17:36.949 "thread": "nvmf_tgt_poll_group_000", 00:17:36.949 "listen_address": { 00:17:36.949 "trtype": "TCP", 00:17:36.949 "adrfam": "IPv4", 00:17:36.949 "traddr": "10.0.0.2", 00:17:36.949 "trsvcid": "4420" 00:17:36.949 }, 00:17:36.949 "peer_address": { 00:17:36.949 "trtype": "TCP", 00:17:36.949 "adrfam": "IPv4", 00:17:36.949 "traddr": "10.0.0.1", 00:17:36.949 "trsvcid": "48288" 00:17:36.949 }, 00:17:36.949 "auth": { 00:17:36.949 "state": "completed", 00:17:36.949 "digest": "sha512", 00:17:36.949 "dhgroup": "ffdhe3072" 00:17:36.949 } 00:17:36.949 } 00:17:36.949 ]' 00:17:36.949 20:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:36.949 20:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:36.949 20:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:37.208 20:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:37.208 20:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:37.208 20:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.208 20:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.208 20:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.208 20:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YWRhMWQ3ZGYyMzVmMGRhNmQwOWIxOWQxOGY3YTI5ZGU2M2VkNjI3MDQ2ZGI4NjRl9gpe9Q==: --dhchap-ctrl-secret DHHC-1:01:NGNjMWI4NmEzOWFkYzEyYjQ3ODUyZTFiYjdmZjBkMzFwE2Ux: 00:17:37.775 20:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.775 20:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:37.775 20:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.775 20:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.035 20:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.035 20:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:38.035 20:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:38.035 20:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:38.035 20:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:17:38.035 20:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:38.035 20:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:38.035 20:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:38.035 20:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:38.035 20:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.035 20:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:38.035 20:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.035 20:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.035 20:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.035 20:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:38.035 20:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:38.294 00:17:38.294 20:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:38.294 20:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:38.294 20:43:12 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.553 20:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.553 20:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.553 20:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.553 20:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.553 20:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.553 20:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:38.553 { 00:17:38.553 "cntlid": 119, 00:17:38.553 "qid": 0, 00:17:38.553 "state": "enabled", 00:17:38.553 "thread": "nvmf_tgt_poll_group_000", 00:17:38.553 "listen_address": { 00:17:38.553 "trtype": "TCP", 00:17:38.553 "adrfam": "IPv4", 00:17:38.553 "traddr": "10.0.0.2", 00:17:38.553 "trsvcid": "4420" 00:17:38.553 }, 00:17:38.553 "peer_address": { 00:17:38.553 "trtype": "TCP", 00:17:38.553 "adrfam": "IPv4", 00:17:38.553 "traddr": "10.0.0.1", 00:17:38.553 "trsvcid": "48314" 00:17:38.553 }, 00:17:38.553 "auth": { 00:17:38.553 "state": "completed", 00:17:38.553 "digest": "sha512", 00:17:38.553 "dhgroup": "ffdhe3072" 00:17:38.553 } 00:17:38.553 } 00:17:38.553 ]' 00:17:38.553 20:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:38.553 20:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:38.553 20:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:38.553 20:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:38.553 20:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:38.553 20:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.553 20:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.553 20:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.812 20:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:Y2Y2M2NhMTIzMGM4YTY3NGZhMzRhMmE2YzdmZGU4Y2RkYjZkOTY2YjFkYmIzYjZmMDVlNWNiNWRmZTM0MWUxNqx2SvY=: 00:17:39.379 20:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.379 20:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:39.379 20:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.379 20:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.379 20:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.380 20:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:39.380 20:43:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:39.380 20:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:39.380 20:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:39.639 20:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:17:39.639 20:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:39.639 20:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:39.639 20:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:39.639 20:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:39.639 20:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.639 20:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.639 20:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.639 20:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.639 20:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.639 20:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.639 20:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.897 00:17:39.897 20:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.897 20:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:39.897 20:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.897 20:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.897 20:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.897 20:43:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.897 20:43:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.156 20:43:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.156 20:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:40.156 { 00:17:40.156 "cntlid": 121, 00:17:40.156 "qid": 0, 00:17:40.156 "state": "enabled", 00:17:40.156 "thread": "nvmf_tgt_poll_group_000", 00:17:40.156 "listen_address": { 00:17:40.156 "trtype": "TCP", 00:17:40.156 "adrfam": "IPv4", 
00:17:40.156 "traddr": "10.0.0.2", 00:17:40.156 "trsvcid": "4420" 00:17:40.156 }, 00:17:40.156 "peer_address": { 00:17:40.156 "trtype": "TCP", 00:17:40.156 "adrfam": "IPv4", 00:17:40.156 "traddr": "10.0.0.1", 00:17:40.156 "trsvcid": "48334" 00:17:40.156 }, 00:17:40.156 "auth": { 00:17:40.156 "state": "completed", 00:17:40.156 "digest": "sha512", 00:17:40.156 "dhgroup": "ffdhe4096" 00:17:40.156 } 00:17:40.156 } 00:17:40.156 ]' 00:17:40.156 20:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:40.156 20:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:40.156 20:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:40.156 20:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:40.156 20:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:40.156 20:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.156 20:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.156 20:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.415 20:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzRkYzRlNTdhZTEyMDA0NGJiNmFiNjU1YjVkMTIyNTdlZmYwNmIxYjljNmNkZDMynomgHQ==: --dhchap-ctrl-secret DHHC-1:03:YTM2Nzg0YzRkMzJhOThmZmNjYTY5OTMzYzRlY2FjZWUzMWJkNmFhMWNlZGQ1YWExMmQ2N2VmODM5NzM4MzNkYz1U18s=: 00:17:40.982 20:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.982 20:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:40.982 20:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.982 20:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.982 20:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.982 20:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:40.982 20:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:40.982 20:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:40.982 20:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:17:40.982 20:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:40.982 20:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:40.982 20:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:40.982 20:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:40.982 20:43:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.982 20:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.982 20:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.982 20:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.982 20:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.982 20:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.982 20:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.241 00:17:41.241 20:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:41.241 20:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:41.241 20:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.500 20:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.500 20:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.500 20:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.500 20:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.500 20:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.500 20:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:41.500 { 00:17:41.500 "cntlid": 123, 00:17:41.500 "qid": 0, 00:17:41.500 "state": "enabled", 00:17:41.500 "thread": "nvmf_tgt_poll_group_000", 00:17:41.500 "listen_address": { 00:17:41.500 "trtype": "TCP", 00:17:41.500 "adrfam": "IPv4", 00:17:41.500 "traddr": "10.0.0.2", 00:17:41.500 "trsvcid": "4420" 00:17:41.500 }, 00:17:41.500 "peer_address": { 00:17:41.500 "trtype": "TCP", 00:17:41.500 "adrfam": "IPv4", 00:17:41.500 "traddr": "10.0.0.1", 00:17:41.500 "trsvcid": "48350" 00:17:41.500 }, 00:17:41.500 "auth": { 00:17:41.500 "state": "completed", 00:17:41.500 "digest": "sha512", 00:17:41.500 "dhgroup": "ffdhe4096" 00:17:41.500 } 00:17:41.500 } 00:17:41.500 ]' 00:17:41.500 20:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:41.500 20:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:41.500 20:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:41.500 20:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:41.500 20:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.802 20:43:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.802 20:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.802 20:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.802 20:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZmMxNDEwYWQxN2UxZGI2NDBmMDZlYTY1YTZiMjJmYTPw6OVr: --dhchap-ctrl-secret DHHC-1:02:YjQxYmQzYWVmYzBhNTQ2MTAyYmY1MjkwMDczNzgzN2IyZjUzYTlhOGIxZmM2OWQ0pSq8zA==: 00:17:42.370 20:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.370 20:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:42.370 20:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.370 20:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.370 20:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.370 20:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:42.370 20:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:42.370 20:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:42.628 20:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:17:42.628 20:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:42.628 20:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:42.628 20:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:42.628 20:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:42.628 20:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.628 20:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.629 20:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.629 20:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.629 20:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.629 20:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.629 20:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.886 00:17:42.886 20:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:42.886 20:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.886 20:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.145 20:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.145 20:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.145 20:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.145 20:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.145 20:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.145 20:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:43.145 { 00:17:43.145 "cntlid": 125, 00:17:43.145 "qid": 0, 00:17:43.145 "state": "enabled", 00:17:43.145 "thread": "nvmf_tgt_poll_group_000", 00:17:43.145 "listen_address": { 00:17:43.145 "trtype": "TCP", 00:17:43.145 "adrfam": "IPv4", 00:17:43.145 "traddr": "10.0.0.2", 00:17:43.145 "trsvcid": "4420" 00:17:43.145 }, 00:17:43.145 "peer_address": { 00:17:43.145 "trtype": "TCP", 00:17:43.145 "adrfam": "IPv4", 00:17:43.145 "traddr": "10.0.0.1", 00:17:43.145 "trsvcid": "58332" 00:17:43.145 }, 00:17:43.145 "auth": { 00:17:43.145 "state": "completed", 00:17:43.145 "digest": "sha512", 00:17:43.145 "dhgroup": "ffdhe4096" 00:17:43.145 } 00:17:43.145 } 00:17:43.145 ]' 00:17:43.145 20:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:43.145 20:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:43.145 20:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:43.145 20:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:43.145 20:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:43.145 20:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.145 20:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.145 20:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.403 20:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YWRhMWQ3ZGYyMzVmMGRhNmQwOWIxOWQxOGY3YTI5ZGU2M2VkNjI3MDQ2ZGI4NjRl9gpe9Q==: --dhchap-ctrl-secret DHHC-1:01:NGNjMWI4NmEzOWFkYzEyYjQ3ODUyZTFiYjdmZjBkMzFwE2Ux: 00:17:43.971 20:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.971 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
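Each pass above is one iteration of connect_authenticate from target/auth.sh, repeated for every (dhgroup, keyid) pair with the sha512 digest. A condensed sketch of a single iteration follows, using only commands that appear in this log; rpc_cmd is the suite's wrapper around rpc.py for the target-side socket and hostrpc the same for -s /var/tmp/host.sock, while $hostnqn, $hostid, $key2 and $ckey2 stand in for this run's uuid-based host NQN and DHHC-1 secrets (placeholders, not literal values from auth.sh):

    # restrict the SPDK host stack to one digest/dhgroup combination
    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
    # target side: allow the host NQN to authenticate with key2/ckey2
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # attach over TCP, driving the in-band DH-HMAC-CHAP handshake
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # verify the live qpair reports the negotiated parameters
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | \
        jq -r '.[0].auth | .digest, .dhgroup, .state'
    hostrpc bdev_nvme_detach_controller nvme0
    # repeat the handshake with the kernel initiator, passing raw DHHC-1 secrets
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" \
        --dhchap-secret "$key2" --dhchap-ctrl-secret "$ckey2"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"

The [[ sha512 == sha512 ]], [[ ffdhe4096 == ffdhe4096 ]] and [[ completed == completed ]] comparisons in the log are the assertions on that jq output: digest, DH group, and authentication state are checked on the qpair itself, not merely inferred from a successful attach.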
00:17:43.971 20:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:43.971 20:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.971 20:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.971 20:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.971 20:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:43.971 20:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:43.971 20:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:43.971 20:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:17:43.971 20:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:43.971 20:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:43.971 20:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:43.971 20:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:43.971 20:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.971 20:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:43.971 20:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.971 20:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.971 20:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.971 20:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:43.971 20:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:44.229 00:17:44.229 20:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:44.229 20:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.229 20:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:44.487 20:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.487 20:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.487 20:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.487 20:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:17:44.487 20:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.487 20:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:44.487 { 00:17:44.487 "cntlid": 127, 00:17:44.487 "qid": 0, 00:17:44.487 "state": "enabled", 00:17:44.487 "thread": "nvmf_tgt_poll_group_000", 00:17:44.487 "listen_address": { 00:17:44.487 "trtype": "TCP", 00:17:44.487 "adrfam": "IPv4", 00:17:44.487 "traddr": "10.0.0.2", 00:17:44.487 "trsvcid": "4420" 00:17:44.487 }, 00:17:44.487 "peer_address": { 00:17:44.487 "trtype": "TCP", 00:17:44.487 "adrfam": "IPv4", 00:17:44.487 "traddr": "10.0.0.1", 00:17:44.487 "trsvcid": "58350" 00:17:44.487 }, 00:17:44.487 "auth": { 00:17:44.487 "state": "completed", 00:17:44.487 "digest": "sha512", 00:17:44.487 "dhgroup": "ffdhe4096" 00:17:44.487 } 00:17:44.487 } 00:17:44.487 ]' 00:17:44.487 20:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:44.487 20:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.487 20:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:44.487 20:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:44.487 20:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:44.745 20:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.745 20:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.745 20:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.745 20:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:Y2Y2M2NhMTIzMGM4YTY3NGZhMzRhMmE2YzdmZGU4Y2RkYjZkOTY2YjFkYmIzYjZmMDVlNWNiNWRmZTM0MWUxNqx2SvY=: 00:17:45.310 20:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.310 20:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:45.310 20:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.310 20:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.310 20:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.310 20:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:45.310 20:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:45.310 20:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:45.310 20:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:45.568 20:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:17:45.568 20:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:45.568 20:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:45.568 20:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:45.568 20:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:45.568 20:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.569 20:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.569 20:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.569 20:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.569 20:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.569 20:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.569 20:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.827 00:17:45.827 20:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:45.827 20:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:45.827 20:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.085 20:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.085 20:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.085 20:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.085 20:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.085 20:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.085 20:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:46.085 { 00:17:46.085 "cntlid": 129, 00:17:46.085 "qid": 0, 00:17:46.085 "state": "enabled", 00:17:46.085 "thread": "nvmf_tgt_poll_group_000", 00:17:46.085 "listen_address": { 00:17:46.085 "trtype": "TCP", 00:17:46.085 "adrfam": "IPv4", 00:17:46.085 "traddr": "10.0.0.2", 00:17:46.085 "trsvcid": "4420" 00:17:46.085 }, 00:17:46.085 "peer_address": { 00:17:46.085 "trtype": "TCP", 00:17:46.085 "adrfam": "IPv4", 00:17:46.085 "traddr": "10.0.0.1", 00:17:46.085 "trsvcid": "58364" 00:17:46.085 }, 00:17:46.085 "auth": { 00:17:46.085 "state": "completed", 00:17:46.085 "digest": "sha512", 00:17:46.085 "dhgroup": "ffdhe6144" 00:17:46.085 } 00:17:46.085 } 00:17:46.085 ]' 00:17:46.085 20:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.085 20:43:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:46.085 20:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.085 20:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:46.085 20:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.085 20:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.085 20:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.085 20:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.343 20:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzRkYzRlNTdhZTEyMDA0NGJiNmFiNjU1YjVkMTIyNTdlZmYwNmIxYjljNmNkZDMynomgHQ==: --dhchap-ctrl-secret DHHC-1:03:YTM2Nzg0YzRkMzJhOThmZmNjYTY5OTMzYzRlY2FjZWUzMWJkNmFhMWNlZGQ1YWExMmQ2N2VmODM5NzM4MzNkYz1U18s=: 00:17:46.910 20:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.910 20:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:46.910 20:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.910 20:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.910 20:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.910 20:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:46.910 20:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:46.910 20:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:47.169 20:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:17:47.169 20:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:47.169 20:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:47.169 20:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:47.169 20:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:47.169 20:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.169 20:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.169 20:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.169 20:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.169 20:43:21 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.169 20:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.169 20:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.428 00:17:47.428 20:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:47.428 20:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:47.428 20:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.688 20:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.688 20:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.688 20:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.688 20:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.688 20:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.688 20:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:47.688 { 00:17:47.688 "cntlid": 131, 00:17:47.688 "qid": 0, 00:17:47.688 "state": "enabled", 00:17:47.688 "thread": "nvmf_tgt_poll_group_000", 00:17:47.688 "listen_address": { 00:17:47.688 "trtype": "TCP", 00:17:47.688 "adrfam": "IPv4", 00:17:47.688 "traddr": "10.0.0.2", 00:17:47.688 "trsvcid": "4420" 00:17:47.688 }, 00:17:47.688 "peer_address": { 00:17:47.688 "trtype": "TCP", 00:17:47.688 "adrfam": "IPv4", 00:17:47.688 "traddr": "10.0.0.1", 00:17:47.688 "trsvcid": "58386" 00:17:47.688 }, 00:17:47.688 "auth": { 00:17:47.688 "state": "completed", 00:17:47.688 "digest": "sha512", 00:17:47.688 "dhgroup": "ffdhe6144" 00:17:47.688 } 00:17:47.688 } 00:17:47.688 ]' 00:17:47.688 20:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:47.688 20:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:47.688 20:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:47.688 20:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:47.688 20:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:47.688 20:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.688 20:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.688 20:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.949 20:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZmMxNDEwYWQxN2UxZGI2NDBmMDZlYTY1YTZiMjJmYTPw6OVr: --dhchap-ctrl-secret DHHC-1:02:YjQxYmQzYWVmYzBhNTQ2MTAyYmY1MjkwMDczNzgzN2IyZjUzYTlhOGIxZmM2OWQ0pSq8zA==: 00:17:48.516 20:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.516 20:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:48.516 20:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.516 20:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.516 20:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.516 20:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:48.516 20:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:48.516 20:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:48.775 20:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:17:48.775 20:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:48.775 20:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:48.775 20:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:48.775 20:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:48.775 20:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.775 20:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.775 20:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.775 20:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.775 20:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.776 20:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.776 20:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.034 00:17:49.034 20:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.034 20:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.034 20:43:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.292 20:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.292 20:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.292 20:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.292 20:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.292 20:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.292 20:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:49.292 { 00:17:49.292 "cntlid": 133, 00:17:49.292 "qid": 0, 00:17:49.292 "state": "enabled", 00:17:49.292 "thread": "nvmf_tgt_poll_group_000", 00:17:49.292 "listen_address": { 00:17:49.292 "trtype": "TCP", 00:17:49.292 "adrfam": "IPv4", 00:17:49.292 "traddr": "10.0.0.2", 00:17:49.292 "trsvcid": "4420" 00:17:49.292 }, 00:17:49.292 "peer_address": { 00:17:49.292 "trtype": "TCP", 00:17:49.292 "adrfam": "IPv4", 00:17:49.292 "traddr": "10.0.0.1", 00:17:49.292 "trsvcid": "58420" 00:17:49.292 }, 00:17:49.292 "auth": { 00:17:49.292 "state": "completed", 00:17:49.292 "digest": "sha512", 00:17:49.292 "dhgroup": "ffdhe6144" 00:17:49.293 } 00:17:49.293 } 00:17:49.293 ]' 00:17:49.293 20:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:49.293 20:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:49.293 20:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:49.293 20:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:49.293 20:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:49.293 20:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.293 20:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.293 20:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.552 20:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YWRhMWQ3ZGYyMzVmMGRhNmQwOWIxOWQxOGY3YTI5ZGU2M2VkNjI3MDQ2ZGI4NjRl9gpe9Q==: --dhchap-ctrl-secret DHHC-1:01:NGNjMWI4NmEzOWFkYzEyYjQ3ODUyZTFiYjdmZjBkMzFwE2Ux: 00:17:50.119 20:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.119 20:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:50.119 20:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.119 20:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.119 20:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.119 20:43:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:50.119 20:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:50.119 20:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:50.378 20:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:17:50.378 20:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.378 20:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:50.378 20:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:50.378 20:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:50.378 20:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.378 20:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:50.378 20:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.378 20:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.378 20:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.378 20:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:50.378 20:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:50.637 00:17:50.637 20:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.637 20:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.637 20:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.896 20:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.896 20:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.896 20:43:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.896 20:43:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.896 20:43:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.896 20:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.896 { 00:17:50.896 "cntlid": 135, 00:17:50.896 "qid": 0, 00:17:50.896 "state": "enabled", 00:17:50.896 "thread": "nvmf_tgt_poll_group_000", 00:17:50.896 "listen_address": { 00:17:50.896 "trtype": "TCP", 00:17:50.896 "adrfam": "IPv4", 00:17:50.896 "traddr": "10.0.0.2", 00:17:50.896 "trsvcid": "4420" 00:17:50.896 }, 
00:17:50.896 "peer_address": { 00:17:50.896 "trtype": "TCP", 00:17:50.896 "adrfam": "IPv4", 00:17:50.896 "traddr": "10.0.0.1", 00:17:50.896 "trsvcid": "58452" 00:17:50.896 }, 00:17:50.896 "auth": { 00:17:50.896 "state": "completed", 00:17:50.896 "digest": "sha512", 00:17:50.896 "dhgroup": "ffdhe6144" 00:17:50.896 } 00:17:50.896 } 00:17:50.896 ]' 00:17:50.896 20:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:50.896 20:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.896 20:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.896 20:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:50.896 20:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:50.896 20:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.896 20:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.896 20:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.155 20:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:Y2Y2M2NhMTIzMGM4YTY3NGZhMzRhMmE2YzdmZGU4Y2RkYjZkOTY2YjFkYmIzYjZmMDVlNWNiNWRmZTM0MWUxNqx2SvY=: 00:17:51.723 20:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.723 20:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:51.723 20:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.723 20:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.723 20:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.723 20:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:51.723 20:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:51.723 20:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:51.723 20:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:51.981 20:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:17:51.981 20:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.981 20:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:51.981 20:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:51.981 20:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:51.981 20:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:17:51.981 20:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.981 20:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.981 20:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.981 20:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.981 20:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.981 20:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.239 00:17:52.496 20:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:52.496 20:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:52.496 20:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.496 20:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.496 20:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.496 20:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.496 20:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.496 20:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.497 20:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.497 { 00:17:52.497 "cntlid": 137, 00:17:52.497 "qid": 0, 00:17:52.497 "state": "enabled", 00:17:52.497 "thread": "nvmf_tgt_poll_group_000", 00:17:52.497 "listen_address": { 00:17:52.497 "trtype": "TCP", 00:17:52.497 "adrfam": "IPv4", 00:17:52.497 "traddr": "10.0.0.2", 00:17:52.497 "trsvcid": "4420" 00:17:52.497 }, 00:17:52.497 "peer_address": { 00:17:52.497 "trtype": "TCP", 00:17:52.497 "adrfam": "IPv4", 00:17:52.497 "traddr": "10.0.0.1", 00:17:52.497 "trsvcid": "58472" 00:17:52.497 }, 00:17:52.497 "auth": { 00:17:52.497 "state": "completed", 00:17:52.497 "digest": "sha512", 00:17:52.497 "dhgroup": "ffdhe8192" 00:17:52.497 } 00:17:52.497 } 00:17:52.497 ]' 00:17:52.497 20:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.497 20:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.497 20:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.755 20:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:52.755 20:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.755 20:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.755 20:43:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.755 20:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.755 20:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzRkYzRlNTdhZTEyMDA0NGJiNmFiNjU1YjVkMTIyNTdlZmYwNmIxYjljNmNkZDMynomgHQ==: --dhchap-ctrl-secret DHHC-1:03:YTM2Nzg0YzRkMzJhOThmZmNjYTY5OTMzYzRlY2FjZWUzMWJkNmFhMWNlZGQ1YWExMmQ2N2VmODM5NzM4MzNkYz1U18s=: 00:17:53.337 20:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.337 20:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:53.337 20:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.337 20:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.337 20:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.337 20:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.337 20:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:53.337 20:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:53.595 20:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:17:53.595 20:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:53.595 20:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:53.595 20:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:53.595 20:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:53.595 20:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.595 20:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.595 20:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.595 20:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.595 20:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.595 20:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.595 20:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.160 00:17:54.160 20:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.160 20:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.160 20:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.160 20:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.160 20:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.160 20:43:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.160 20:43:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.160 20:43:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.160 20:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.160 { 00:17:54.160 "cntlid": 139, 00:17:54.160 "qid": 0, 00:17:54.160 "state": "enabled", 00:17:54.160 "thread": "nvmf_tgt_poll_group_000", 00:17:54.160 "listen_address": { 00:17:54.160 "trtype": "TCP", 00:17:54.160 "adrfam": "IPv4", 00:17:54.160 "traddr": "10.0.0.2", 00:17:54.160 "trsvcid": "4420" 00:17:54.160 }, 00:17:54.160 "peer_address": { 00:17:54.160 "trtype": "TCP", 00:17:54.160 "adrfam": "IPv4", 00:17:54.160 "traddr": "10.0.0.1", 00:17:54.160 "trsvcid": "48254" 00:17:54.160 }, 00:17:54.160 "auth": { 00:17:54.160 "state": "completed", 00:17:54.160 "digest": "sha512", 00:17:54.160 "dhgroup": "ffdhe8192" 00:17:54.160 } 00:17:54.160 } 00:17:54.160 ]' 00:17:54.161 20:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.419 20:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:54.419 20:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.419 20:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:54.419 20:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.419 20:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.419 20:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.419 20:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.676 20:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZmMxNDEwYWQxN2UxZGI2NDBmMDZlYTY1YTZiMjJmYTPw6OVr: --dhchap-ctrl-secret DHHC-1:02:YjQxYmQzYWVmYzBhNTQ2MTAyYmY1MjkwMDczNzgzN2IyZjUzYTlhOGIxZmM2OWQ0pSq8zA==: 00:17:55.241 20:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.241 20:43:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:55.242 20:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.242 20:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.242 20:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.242 20:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.242 20:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:55.242 20:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:55.242 20:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:17:55.242 20:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:55.242 20:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:55.242 20:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:55.242 20:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:55.242 20:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.242 20:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.242 20:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.242 20:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.242 20:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.242 20:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.242 20:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.808 00:17:55.809 20:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:55.809 20:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:55.809 20:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.067 20:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.067 20:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.067 20:43:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.067 20:43:30 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:56.067 20:43:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.067 20:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.067 { 00:17:56.067 "cntlid": 141, 00:17:56.067 "qid": 0, 00:17:56.067 "state": "enabled", 00:17:56.067 "thread": "nvmf_tgt_poll_group_000", 00:17:56.067 "listen_address": { 00:17:56.067 "trtype": "TCP", 00:17:56.067 "adrfam": "IPv4", 00:17:56.067 "traddr": "10.0.0.2", 00:17:56.067 "trsvcid": "4420" 00:17:56.067 }, 00:17:56.067 "peer_address": { 00:17:56.067 "trtype": "TCP", 00:17:56.067 "adrfam": "IPv4", 00:17:56.067 "traddr": "10.0.0.1", 00:17:56.067 "trsvcid": "48298" 00:17:56.067 }, 00:17:56.067 "auth": { 00:17:56.067 "state": "completed", 00:17:56.067 "digest": "sha512", 00:17:56.067 "dhgroup": "ffdhe8192" 00:17:56.067 } 00:17:56.067 } 00:17:56.067 ]' 00:17:56.067 20:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.067 20:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.067 20:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.067 20:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:56.067 20:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.067 20:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.067 20:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.067 20:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.325 20:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YWRhMWQ3ZGYyMzVmMGRhNmQwOWIxOWQxOGY3YTI5ZGU2M2VkNjI3MDQ2ZGI4NjRl9gpe9Q==: --dhchap-ctrl-secret DHHC-1:01:NGNjMWI4NmEzOWFkYzEyYjQ3ODUyZTFiYjdmZjBkMzFwE2Ux: 00:17:56.892 20:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.892 20:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:56.892 20:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.892 20:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.892 20:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.892 20:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.892 20:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:56.892 20:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:57.150 20:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:17:57.150 20:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.150 20:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:57.150 20:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:57.150 20:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:57.150 20:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.150 20:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:57.150 20:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.150 20:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.150 20:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.150 20:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:57.150 20:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:57.716 00:17:57.716 20:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:57.716 20:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:57.716 20:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.716 20:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.716 20:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.716 20:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.716 20:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.716 20:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.716 20:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:57.716 { 00:17:57.716 "cntlid": 143, 00:17:57.716 "qid": 0, 00:17:57.716 "state": "enabled", 00:17:57.716 "thread": "nvmf_tgt_poll_group_000", 00:17:57.716 "listen_address": { 00:17:57.716 "trtype": "TCP", 00:17:57.716 "adrfam": "IPv4", 00:17:57.716 "traddr": "10.0.0.2", 00:17:57.716 "trsvcid": "4420" 00:17:57.716 }, 00:17:57.716 "peer_address": { 00:17:57.716 "trtype": "TCP", 00:17:57.716 "adrfam": "IPv4", 00:17:57.716 "traddr": "10.0.0.1", 00:17:57.716 "trsvcid": "48320" 00:17:57.716 }, 00:17:57.716 "auth": { 00:17:57.716 "state": "completed", 00:17:57.716 "digest": "sha512", 00:17:57.716 "dhgroup": "ffdhe8192" 00:17:57.716 } 00:17:57.716 } 00:17:57.716 ]' 00:17:57.716 20:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:57.716 20:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:57.716 
20:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:57.716 20:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:57.716 20:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:57.975 20:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.975 20:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.975 20:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.975 20:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:Y2Y2M2NhMTIzMGM4YTY3NGZhMzRhMmE2YzdmZGU4Y2RkYjZkOTY2YjFkYmIzYjZmMDVlNWNiNWRmZTM0MWUxNqx2SvY=: 00:17:58.544 20:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.544 20:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:58.544 20:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.544 20:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.544 20:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.544 20:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:58.544 20:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:17:58.544 20:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:58.544 20:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:58.544 20:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:58.544 20:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:58.840 20:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:17:58.840 20:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:58.840 20:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:58.840 20:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:58.840 20:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:58.840 20:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.840 20:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:17:58.840 20:43:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.840 20:43:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.840 20:43:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.840 20:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.840 20:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.432 00:17:59.432 20:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.432 20:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.432 20:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.432 20:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.432 20:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.432 20:43:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.432 20:43:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.432 20:43:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.432 20:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.432 { 00:17:59.432 "cntlid": 145, 00:17:59.432 "qid": 0, 00:17:59.432 "state": "enabled", 00:17:59.432 "thread": "nvmf_tgt_poll_group_000", 00:17:59.432 "listen_address": { 00:17:59.432 "trtype": "TCP", 00:17:59.432 "adrfam": "IPv4", 00:17:59.432 "traddr": "10.0.0.2", 00:17:59.432 "trsvcid": "4420" 00:17:59.432 }, 00:17:59.432 "peer_address": { 00:17:59.432 "trtype": "TCP", 00:17:59.432 "adrfam": "IPv4", 00:17:59.432 "traddr": "10.0.0.1", 00:17:59.432 "trsvcid": "48366" 00:17:59.432 }, 00:17:59.432 "auth": { 00:17:59.432 "state": "completed", 00:17:59.432 "digest": "sha512", 00:17:59.432 "dhgroup": "ffdhe8192" 00:17:59.432 } 00:17:59.432 } 00:17:59.432 ]' 00:17:59.432 20:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.432 20:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.432 20:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.691 20:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:59.691 20:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:59.691 20:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.691 20:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.691 20:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.691 20:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzRkYzRlNTdhZTEyMDA0NGJiNmFiNjU1YjVkMTIyNTdlZmYwNmIxYjljNmNkZDMynomgHQ==: --dhchap-ctrl-secret DHHC-1:03:YTM2Nzg0YzRkMzJhOThmZmNjYTY5OTMzYzRlY2FjZWUzMWJkNmFhMWNlZGQ1YWExMmQ2N2VmODM5NzM4MzNkYz1U18s=: 00:18:00.256 20:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.256 20:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:00.256 20:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.256 20:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.515 20:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.515 20:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:18:00.515 20:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.515 20:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.515 20:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.515 20:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:00.515 20:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:00.515 20:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:00.515 20:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:00.515 20:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:00.515 20:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:00.515 20:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:00.515 20:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:00.515 20:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:18:00.774 request: 00:18:00.774 { 00:18:00.774 "name": "nvme0", 00:18:00.774 "trtype": "tcp", 00:18:00.774 "traddr": "10.0.0.2", 00:18:00.774 "adrfam": "ipv4", 00:18:00.774 "trsvcid": "4420", 00:18:00.774 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:00.774 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:00.774 "prchk_reftag": false, 00:18:00.774 "prchk_guard": false, 00:18:00.774 "hdgst": false, 00:18:00.774 "ddgst": false, 00:18:00.774 "dhchap_key": "key2", 00:18:00.774 "method": "bdev_nvme_attach_controller", 00:18:00.774 "req_id": 1 00:18:00.774 } 00:18:00.774 Got JSON-RPC error response 00:18:00.774 response: 00:18:00.774 { 00:18:00.774 "code": -5, 00:18:00.774 "message": "Input/output error" 00:18:00.774 } 00:18:00.774 20:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:00.774 20:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:00.774 20:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:00.774 20:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:00.774 20:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:00.774 20:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.774 20:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.774 20:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.774 20:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.774 20:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.774 20:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.774 20:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.774 20:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:00.774 20:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:00.774 20:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:00.774 20:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:00.774 20:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:00.774 20:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:00.774 20:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:00.774 20:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:00.774 20:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:01.341 request: 00:18:01.341 { 00:18:01.341 "name": "nvme0", 00:18:01.341 "trtype": "tcp", 00:18:01.341 "traddr": "10.0.0.2", 00:18:01.341 "adrfam": "ipv4", 00:18:01.341 "trsvcid": "4420", 00:18:01.341 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:01.341 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:01.341 "prchk_reftag": false, 00:18:01.341 "prchk_guard": false, 00:18:01.341 "hdgst": false, 00:18:01.341 "ddgst": false, 00:18:01.341 "dhchap_key": "key1", 00:18:01.341 "dhchap_ctrlr_key": "ckey2", 00:18:01.341 "method": "bdev_nvme_attach_controller", 00:18:01.341 "req_id": 1 00:18:01.341 } 00:18:01.341 Got JSON-RPC error response 00:18:01.341 response: 00:18:01.341 { 00:18:01.341 "code": -5, 00:18:01.341 "message": "Input/output error" 00:18:01.341 } 00:18:01.341 20:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:01.341 20:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:01.341 20:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:01.341 20:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:01.341 20:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:01.341 20:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.341 20:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.341 20:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.341 20:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:18:01.341 20:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.341 20:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.341 20:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.341 20:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.341 20:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:01.341 20:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.341 20:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:18:01.341 20:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:01.341 20:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:01.341 20:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:01.341 20:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.341 20:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.908 request: 00:18:01.908 { 00:18:01.908 "name": "nvme0", 00:18:01.908 "trtype": "tcp", 00:18:01.908 "traddr": "10.0.0.2", 00:18:01.908 "adrfam": "ipv4", 00:18:01.908 "trsvcid": "4420", 00:18:01.908 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:01.908 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:01.908 "prchk_reftag": false, 00:18:01.908 "prchk_guard": false, 00:18:01.909 "hdgst": false, 00:18:01.909 "ddgst": false, 00:18:01.909 "dhchap_key": "key1", 00:18:01.909 "dhchap_ctrlr_key": "ckey1", 00:18:01.909 "method": "bdev_nvme_attach_controller", 00:18:01.909 "req_id": 1 00:18:01.909 } 00:18:01.909 Got JSON-RPC error response 00:18:01.909 response: 00:18:01.909 { 00:18:01.909 "code": -5, 00:18:01.909 "message": "Input/output error" 00:18:01.909 } 00:18:01.909 20:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:01.909 20:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:01.909 20:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:01.909 20:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:01.909 20:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:01.909 20:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.909 20:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.909 20:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.909 20:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2683526 00:18:01.909 20:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2683526 ']' 00:18:01.909 20:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2683526 00:18:01.909 20:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:01.909 20:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:01.909 20:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2683526 00:18:01.909 20:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:01.909 20:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:18:01.909 20:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2683526' 00:18:01.909 killing process with pid 2683526 00:18:01.909 20:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2683526 00:18:01.909 20:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2683526 00:18:01.909 20:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:01.909 20:43:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:01.909 20:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:01.909 20:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.909 20:43:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2704449 00:18:01.909 20:43:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2704449 00:18:01.909 20:43:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:01.909 20:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2704449 ']' 00:18:01.909 20:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.909 20:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:01.909 20:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.909 20:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:01.909 20:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.844 20:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:02.844 20:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:02.844 20:43:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:02.844 20:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:02.844 20:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.844 20:43:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:02.844 20:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:02.844 20:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2704449 00:18:02.844 20:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2704449 ']' 00:18:02.844 20:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.844 20:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:02.844 20:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
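The restart above brings the target back up with --wait-for-rpc and the nvmf_auth log flag before any further RPCs run. A minimal sketch of that startup pattern, assuming the workspace paths shown in this log; the polling loop stands in for the test suite's waitforlisten helper, and the retry count and sleep interval are illustrative:

  # Start nvmf_tgt held in pre-init state, with nvmf_auth debug logging enabled
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/spdk.sock
  "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  # Poll the RPC socket until the app answers (the role waitforlisten plays here)
  for _ in $(seq 1 100); do
      "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done
  # With --wait-for-rpc the app waits for this RPC before completing framework init
  "$SPDK/scripts/rpc.py" -s "$SOCK" framework_start_init

Note that in this run the target is actually launched inside the cvl_0_0_ns_spdk network namespace via ip netns exec; that prefix is dropped here for brevity.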
00:18:02.844 20:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:02.844 20:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.102 20:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:03.102 20:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:03.102 20:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:18:03.102 20:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.102 20:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.102 20:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.102 20:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:18:03.102 20:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.102 20:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:03.102 20:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:03.102 20:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:03.102 20:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.102 20:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:03.102 20:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.102 20:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.102 20:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.102 20:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:03.102 20:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:03.691 00:18:03.691 20:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.691 20:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.691 20:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.951 20:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.951 20:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.951 20:43:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.951 20:43:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.951 20:43:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.951 20:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:03.951 { 00:18:03.951 
"cntlid": 1, 00:18:03.951 "qid": 0, 00:18:03.951 "state": "enabled", 00:18:03.951 "thread": "nvmf_tgt_poll_group_000", 00:18:03.951 "listen_address": { 00:18:03.951 "trtype": "TCP", 00:18:03.951 "adrfam": "IPv4", 00:18:03.951 "traddr": "10.0.0.2", 00:18:03.951 "trsvcid": "4420" 00:18:03.951 }, 00:18:03.951 "peer_address": { 00:18:03.951 "trtype": "TCP", 00:18:03.951 "adrfam": "IPv4", 00:18:03.951 "traddr": "10.0.0.1", 00:18:03.951 "trsvcid": "42402" 00:18:03.951 }, 00:18:03.951 "auth": { 00:18:03.951 "state": "completed", 00:18:03.951 "digest": "sha512", 00:18:03.951 "dhgroup": "ffdhe8192" 00:18:03.951 } 00:18:03.951 } 00:18:03.951 ]' 00:18:03.951 20:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:03.951 20:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.951 20:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.951 20:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:03.951 20:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:03.951 20:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.951 20:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.951 20:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.209 20:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:Y2Y2M2NhMTIzMGM4YTY3NGZhMzRhMmE2YzdmZGU4Y2RkYjZkOTY2YjFkYmIzYjZmMDVlNWNiNWRmZTM0MWUxNqx2SvY=: 00:18:04.775 20:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.775 20:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:04.775 20:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.775 20:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.775 20:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.775 20:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:04.776 20:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.776 20:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.776 20:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.776 20:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:04.776 20:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:05.035 20:43:39 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.035 20:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:05.035 20:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.035 20:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:05.035 20:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:05.035 20:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:05.035 20:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:05.035 20:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.035 20:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.035 request: 00:18:05.035 { 00:18:05.035 "name": "nvme0", 00:18:05.035 "trtype": "tcp", 00:18:05.035 "traddr": "10.0.0.2", 00:18:05.035 "adrfam": "ipv4", 00:18:05.035 "trsvcid": "4420", 00:18:05.035 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:05.035 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:05.035 "prchk_reftag": false, 00:18:05.035 "prchk_guard": false, 00:18:05.035 "hdgst": false, 00:18:05.035 "ddgst": false, 00:18:05.035 "dhchap_key": "key3", 00:18:05.035 "method": "bdev_nvme_attach_controller", 00:18:05.035 "req_id": 1 00:18:05.035 } 00:18:05.035 Got JSON-RPC error response 00:18:05.035 response: 00:18:05.035 { 00:18:05.035 "code": -5, 00:18:05.035 "message": "Input/output error" 00:18:05.035 } 00:18:05.035 20:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:05.035 20:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:05.035 20:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:05.035 20:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:05.035 20:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:18:05.035 20:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:18:05.035 20:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:05.035 20:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:05.294 20:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.294 20:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:05.294 20:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.294 20:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:05.294 20:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:05.294 20:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:05.294 20:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:05.294 20:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.294 20:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.552 request: 00:18:05.552 { 00:18:05.552 "name": "nvme0", 00:18:05.552 "trtype": "tcp", 00:18:05.552 "traddr": "10.0.0.2", 00:18:05.552 "adrfam": "ipv4", 00:18:05.552 "trsvcid": "4420", 00:18:05.552 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:05.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:05.552 "prchk_reftag": false, 00:18:05.553 "prchk_guard": false, 00:18:05.553 "hdgst": false, 00:18:05.553 "ddgst": false, 00:18:05.553 "dhchap_key": "key3", 00:18:05.553 "method": "bdev_nvme_attach_controller", 00:18:05.553 "req_id": 1 00:18:05.553 } 00:18:05.553 Got JSON-RPC error response 00:18:05.553 response: 00:18:05.553 { 00:18:05.553 "code": -5, 00:18:05.553 "message": "Input/output error" 00:18:05.553 } 00:18:05.553 20:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:05.553 20:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:05.553 20:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:05.553 20:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:05.553 20:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:05.553 20:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:18:05.553 20:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:05.553 20:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:05.553 20:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:05.553 20:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:05.553 20:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:05.553 20:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.553 20:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.553 20:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.553 20:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:05.553 20:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.553 20:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.553 20:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.553 20:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:05.553 20:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:05.553 20:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:05.553 20:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:05.553 20:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:05.553 20:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:05.553 20:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:05.553 20:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:05.553 20:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:05.810 request: 00:18:05.810 { 00:18:05.810 "name": "nvme0", 00:18:05.810 "trtype": "tcp", 00:18:05.810 "traddr": "10.0.0.2", 00:18:05.810 "adrfam": "ipv4", 00:18:05.810 "trsvcid": "4420", 00:18:05.810 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:05.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:05.810 "prchk_reftag": false, 00:18:05.810 "prchk_guard": false, 00:18:05.810 "hdgst": false, 00:18:05.810 "ddgst": false, 00:18:05.810 
"dhchap_key": "key0", 00:18:05.810 "dhchap_ctrlr_key": "key1", 00:18:05.810 "method": "bdev_nvme_attach_controller", 00:18:05.810 "req_id": 1 00:18:05.810 } 00:18:05.810 Got JSON-RPC error response 00:18:05.810 response: 00:18:05.810 { 00:18:05.810 "code": -5, 00:18:05.810 "message": "Input/output error" 00:18:05.810 } 00:18:05.811 20:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:05.811 20:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:05.811 20:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:05.811 20:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:05.811 20:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:05.811 20:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:06.068 00:18:06.068 20:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:18:06.068 20:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:18:06.068 20:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.327 20:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.327 20:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.327 20:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.585 20:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:18:06.585 20:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:18:06.585 20:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2683773 00:18:06.585 20:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2683773 ']' 00:18:06.585 20:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2683773 00:18:06.585 20:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:06.585 20:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:06.585 20:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2683773 00:18:06.585 20:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:06.585 20:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:06.585 20:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2683773' 00:18:06.585 killing process with pid 2683773 00:18:06.585 20:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2683773 00:18:06.585 20:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2683773 
00:18:06.844 20:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:06.844 20:43:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:06.844 20:43:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:18:06.844 20:43:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:06.844 20:43:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:18:06.844 20:43:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:06.844 20:43:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:06.844 rmmod nvme_tcp 00:18:06.844 rmmod nvme_fabrics 00:18:06.844 rmmod nvme_keyring 00:18:06.844 20:43:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:06.844 20:43:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:18:06.844 20:43:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:18:06.844 20:43:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2704449 ']' 00:18:06.844 20:43:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2704449 00:18:06.844 20:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2704449 ']' 00:18:06.844 20:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2704449 00:18:06.844 20:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:06.844 20:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:06.844 20:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2704449 00:18:06.844 20:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:06.844 20:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:06.844 20:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2704449' 00:18:06.844 killing process with pid 2704449 00:18:06.844 20:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2704449 00:18:06.844 20:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2704449 00:18:07.103 20:43:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:07.103 20:43:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:07.103 20:43:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:07.103 20:43:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:07.103 20:43:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:07.103 20:43:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.103 20:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:07.103 20:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.654 20:43:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:09.654 20:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.1rd /tmp/spdk.key-sha256.OgZ /tmp/spdk.key-sha384.mue /tmp/spdk.key-sha512.Bcn /tmp/spdk.key-sha512.HtR /tmp/spdk.key-sha384.7FD /tmp/spdk.key-sha256.rf9 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:09.654 00:18:09.654 real 2m10.199s 00:18:09.654 user 5m0.110s 00:18:09.654 sys 0m19.949s 00:18:09.654 20:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:09.654 20:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.654 ************************************ 00:18:09.654 END TEST nvmf_auth_target 00:18:09.654 ************************************ 00:18:09.654 20:43:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:09.654 20:43:43 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:18:09.654 20:43:43 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:09.654 20:43:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:09.654 20:43:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:09.654 20:43:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:09.654 ************************************ 00:18:09.654 START TEST nvmf_bdevio_no_huge 00:18:09.654 ************************************ 00:18:09.654 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:09.654 * Looking for test storage... 00:18:09.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:09.654 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:09.654 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:09.654 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:09.654 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:09.654 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:09.654 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:09.654 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:09.654 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:09.654 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:09.654 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:09.654 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:09.654 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:09.654 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:09.654 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:09.654 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:09.654 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:09.654 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:09.654 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:18:09.654 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:09.654 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:09.654 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:09.654 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:09.655 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.655 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.655 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.655 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:09.655 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.655 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:18:09.655 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:09.655 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:09.655 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:09.655 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:09.655 20:43:43 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:09.655 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:09.655 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:09.655 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:09.655 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:09.655 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:09.655 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:09.655 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:09.655 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:09.655 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:09.655 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:09.655 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:09.655 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.655 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:09.655 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.655 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:09.655 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:09.655 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:18:09.655 20:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:14.927 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:14.927 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:18:14.927 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:14.927 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:14.927 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:14.927 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:14.927 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:14.927 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:18:14.927 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:14.927 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:18:14.927 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:18:14.927 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:18:14.927 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:18:14.927 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:18:14.927 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:18:14.927 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:14.927 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:18:14.927 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:14.927 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:14.927 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:14.927 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:14.927 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:14.927 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:14.927 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:14.927 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:14.927 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:14.928 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:14.928 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:14.928 
20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:14.928 Found net devices under 0000:86:00.0: cvl_0_0 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:14.928 Found net devices under 0000:86:00.1: cvl_0_1 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:14.928 20:43:49 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:14.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:14.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:18:14.928 00:18:14.928 --- 10.0.0.2 ping statistics --- 00:18:14.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.928 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:14.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:14.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:18:14.928 00:18:14.928 --- 10.0.0.1 ping statistics --- 00:18:14.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.928 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2708830 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 
2708830 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 2708830 ']' 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:14.928 20:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:15.187 [2024-07-15 20:43:49.423817] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:18:15.187 [2024-07-15 20:43:49.423863] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:15.187 [2024-07-15 20:43:49.487909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:15.187 [2024-07-15 20:43:49.573351] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:15.187 [2024-07-15 20:43:49.573391] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:15.187 [2024-07-15 20:43:49.573398] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.187 [2024-07-15 20:43:49.573404] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.187 [2024-07-15 20:43:49.573409] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:15.187 [2024-07-15 20:43:49.573461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:15.187 [2024-07-15 20:43:49.573572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:15.187 [2024-07-15 20:43:49.573678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:15.187 [2024-07-15 20:43:49.573677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:15.755 20:43:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:15.755 20:43:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:18:15.755 20:43:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:15.755 20:43:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:15.755 20:43:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:16.014 20:43:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:16.014 20:43:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:16.014 20:43:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.014 20:43:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:16.014 [2024-07-15 20:43:50.272099] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:16.014 20:43:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.014 20:43:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:16.014 20:43:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.014 20:43:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:16.014 Malloc0 00:18:16.014 20:43:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.014 20:43:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:16.014 20:43:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.014 20:43:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:16.014 20:43:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.014 20:43:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:16.014 20:43:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.014 20:43:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:16.014 20:43:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.014 20:43:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:16.014 20:43:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.014 20:43:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:16.014 [2024-07-15 20:43:50.308361] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:16.014 20:43:50 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.014 20:43:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:16.014 20:43:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:16.014 20:43:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:18:16.014 20:43:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:18:16.014 20:43:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:16.014 20:43:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:16.014 { 00:18:16.014 "params": { 00:18:16.014 "name": "Nvme$subsystem", 00:18:16.014 "trtype": "$TEST_TRANSPORT", 00:18:16.014 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:16.014 "adrfam": "ipv4", 00:18:16.014 "trsvcid": "$NVMF_PORT", 00:18:16.014 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:16.014 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:16.014 "hdgst": ${hdgst:-false}, 00:18:16.014 "ddgst": ${ddgst:-false} 00:18:16.014 }, 00:18:16.014 "method": "bdev_nvme_attach_controller" 00:18:16.014 } 00:18:16.014 EOF 00:18:16.014 )") 00:18:16.014 20:43:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:18:16.014 20:43:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:18:16.014 20:43:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:18:16.014 20:43:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:16.014 "params": { 00:18:16.014 "name": "Nvme1", 00:18:16.014 "trtype": "tcp", 00:18:16.014 "traddr": "10.0.0.2", 00:18:16.014 "adrfam": "ipv4", 00:18:16.015 "trsvcid": "4420", 00:18:16.015 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:16.015 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:16.015 "hdgst": false, 00:18:16.015 "ddgst": false 00:18:16.015 }, 00:18:16.015 "method": "bdev_nvme_attach_controller" 00:18:16.015 }' 00:18:16.015 [2024-07-15 20:43:50.359664] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:18:16.015 [2024-07-15 20:43:50.359711] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2708958 ] 00:18:16.015 [2024-07-15 20:43:50.418636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:16.273 [2024-07-15 20:43:50.505775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.273 [2024-07-15 20:43:50.505873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.273 [2024-07-15 20:43:50.505875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.532 I/O targets: 00:18:16.532 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:16.532 00:18:16.532 00:18:16.532 CUnit - A unit testing framework for C - Version 2.1-3 00:18:16.532 http://cunit.sourceforge.net/ 00:18:16.532 00:18:16.532 00:18:16.532 Suite: bdevio tests on: Nvme1n1 00:18:16.532 Test: blockdev write read block ...passed 00:18:16.532 Test: blockdev write zeroes read block ...passed 00:18:16.532 Test: blockdev write zeroes read no split ...passed 00:18:16.532 Test: blockdev write zeroes read split ...passed 00:18:16.532 Test: blockdev write zeroes read split partial ...passed 00:18:16.532 Test: blockdev reset ...[2024-07-15 20:43:50.973588] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:16.532 [2024-07-15 20:43:50.973650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8300 (9): Bad file descriptor 00:18:16.532 [2024-07-15 20:43:50.984694] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:16.532 passed 00:18:16.532 Test: blockdev write read 8 blocks ...passed 00:18:16.792 Test: blockdev write read size > 128k ...passed 00:18:16.792 Test: blockdev write read invalid size ...passed 00:18:16.792 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:16.792 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:16.792 Test: blockdev write read max offset ...passed 00:18:16.792 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:16.792 Test: blockdev writev readv 8 blocks ...passed 00:18:16.792 Test: blockdev writev readv 30 x 1block ...passed 00:18:16.792 Test: blockdev writev readv block ...passed 00:18:16.792 Test: blockdev writev readv size > 128k ...passed 00:18:16.792 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:16.792 Test: blockdev comparev and writev ...[2024-07-15 20:43:51.199639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:16.792 [2024-07-15 20:43:51.199667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:16.792 [2024-07-15 20:43:51.199685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:16.792 [2024-07-15 20:43:51.199697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.792 [2024-07-15 20:43:51.199985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:16.792 [2024-07-15 20:43:51.199998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:16.792 [2024-07-15 20:43:51.200013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:16.792 [2024-07-15 20:43:51.200025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:16.792 [2024-07-15 20:43:51.200312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:16.792 [2024-07-15 20:43:51.200325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:16.792 [2024-07-15 20:43:51.200349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:16.792 [2024-07-15 20:43:51.200361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:16.792 [2024-07-15 20:43:51.200646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:16.792 [2024-07-15 20:43:51.200657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:16.792 [2024-07-15 20:43:51.200677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:16.792 [2024-07-15 20:43:51.200688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:16.792 passed 00:18:17.052 Test: blockdev nvme passthru rw ...passed 00:18:17.052 Test: blockdev nvme passthru vendor specific ...[2024-07-15 20:43:51.283590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:17.052 [2024-07-15 20:43:51.283608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:17.052 [2024-07-15 20:43:51.283761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:17.052 [2024-07-15 20:43:51.283772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:17.052 [2024-07-15 20:43:51.283923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:17.052 [2024-07-15 20:43:51.283934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:17.052 [2024-07-15 20:43:51.284085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:17.052 [2024-07-15 20:43:51.284096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:17.052 passed 00:18:17.052 Test: blockdev nvme admin passthru ...passed 00:18:17.052 Test: blockdev copy ...passed 00:18:17.052 00:18:17.052 Run Summary: Type Total Ran Passed Failed Inactive 00:18:17.052 suites 1 1 n/a 0 0 00:18:17.052 tests 23 23 23 0 0 00:18:17.052 asserts 152 152 152 0 n/a 00:18:17.052 00:18:17.052 Elapsed time = 1.139 seconds 00:18:17.311 20:43:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:17.311 20:43:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.311 20:43:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:17.311 20:43:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.311 20:43:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:17.311 20:43:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:17.311 20:43:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:17.311 20:43:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:18:17.311 20:43:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:17.311 20:43:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:18:17.311 20:43:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:17.311 20:43:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:17.311 rmmod nvme_tcp 00:18:17.311 rmmod nvme_fabrics 00:18:17.311 rmmod nvme_keyring 00:18:17.311 20:43:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:17.311 20:43:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:18:17.311 20:43:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:18:17.311 20:43:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2708830 ']' 00:18:17.311 20:43:51 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2708830 00:18:17.311 20:43:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 2708830 ']' 00:18:17.311 20:43:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 2708830 00:18:17.311 20:43:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:18:17.311 20:43:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:17.311 20:43:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2708830 00:18:17.311 20:43:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:18:17.311 20:43:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:18:17.311 20:43:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2708830' 00:18:17.311 killing process with pid 2708830 00:18:17.311 20:43:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 2708830 00:18:17.311 20:43:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 2708830 00:18:17.570 20:43:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:17.570 20:43:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:17.570 20:43:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:17.570 20:43:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:17.570 20:43:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:17.570 20:43:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.570 20:43:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:17.570 20:43:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:20.111 20:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:20.111 00:18:20.111 real 0m10.477s 00:18:20.111 user 0m13.461s 00:18:20.111 sys 0m5.103s 00:18:20.111 20:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:20.111 20:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:20.111 ************************************ 00:18:20.111 END TEST nvmf_bdevio_no_huge 00:18:20.111 ************************************ 00:18:20.111 20:43:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:20.111 20:43:54 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:20.111 20:43:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:20.111 20:43:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:20.111 20:43:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:20.111 ************************************ 00:18:20.111 START TEST nvmf_tls 00:18:20.111 ************************************ 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:20.111 * Looking for test storage... 
00:18:20.111 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:18:20.111 20:43:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.409 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:25.409 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:18:25.409 
20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:25.409 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:25.409 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:25.409 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:25.409 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:25.409 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:18:25.409 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:25.409 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:18:25.409 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:18:25.409 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:18:25.409 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:18:25.409 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:25.410 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:25.410 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:25.410 Found net devices under 0000:86:00.0: cvl_0_0 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:25.410 Found net devices under 0000:86:00.1: cvl_0_1 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:25.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:25.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:18:25.410 00:18:25.410 --- 10.0.0.2 ping statistics --- 00:18:25.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.410 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:25.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:25.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:18:25.410 00:18:25.410 --- 10.0.0.1 ping statistics --- 00:18:25.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.410 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2712706 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2712706 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2712706 ']' 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:25.410 20:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.410 [2024-07-15 20:43:59.481611] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:18:25.410 [2024-07-15 20:43:59.481654] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:25.410 EAL: No free 2048 kB hugepages reported on node 1 00:18:25.410 [2024-07-15 20:43:59.538005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.410 [2024-07-15 20:43:59.617074] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:25.410 [2024-07-15 20:43:59.617108] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
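The nvmf_tcp_init sequence traced above builds the whole fixture on one host: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace while the initiator keeps cvl_0_1, so NVMe/TCP traffic crosses a real link between the two e810 ports. A minimal standalone sketch of that topology, assuming the same interface names and 10.0.0.x addressing as the trace; the relative nvmf_tgt path stands in for the full workspace path that nvmfappstart uses:

    # Isolate the target port in its own namespace; the initiator port stays in the default one.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Address both ends of the link: initiator side first, then inside the namespace.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # Bring the links up and open the NVMe/TCP port toward the initiator.
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Both directions must ping before the target app is started inside the namespace.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &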
00:18:25.410 [2024-07-15 20:43:59.617115] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:25.410 [2024-07-15 20:43:59.617121] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:25.410 [2024-07-15 20:43:59.617126] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:25.410 [2024-07-15 20:43:59.617144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.977 20:44:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:25.977 20:44:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:25.977 20:44:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:25.977 20:44:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:25.977 20:44:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.977 20:44:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:25.977 20:44:00 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:18:25.977 20:44:00 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:26.236 true 00:18:26.236 20:44:00 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:26.236 20:44:00 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:18:26.236 20:44:00 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:18:26.236 20:44:00 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:18:26.236 20:44:00 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:26.494 20:44:00 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:26.494 20:44:00 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:18:26.752 20:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:18:26.752 20:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:18:26.752 20:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:26.752 20:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:26.752 20:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:18:27.011 20:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:18:27.011 20:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:18:27.011 20:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:27.011 20:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:18:27.269 20:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:18:27.269 20:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:18:27.269 20:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:27.269 20:44:01 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:27.269 20:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:18:27.528 20:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:18:27.528 20:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:18:27.528 20:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:27.787 20:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:27.787 20:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:18:27.787 20:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:18:27.787 20:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:18:27.787 20:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:27.787 20:44:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:27.787 20:44:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:27.787 20:44:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:27.787 20:44:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:18:27.787 20:44:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:27.787 20:44:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:27.787 20:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:27.787 20:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:27.787 20:44:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:27.787 20:44:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:27.787 20:44:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:27.787 20:44:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:18:27.787 20:44:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:27.787 20:44:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:28.045 20:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:28.045 20:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:18:28.045 20:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.MjBY9O4EsQ 00:18:28.045 20:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:28.045 20:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.zavGk8KZe4 00:18:28.045 20:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:28.045 20:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:28.045 20:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.MjBY9O4EsQ 00:18:28.045 20:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.zavGk8KZe4 00:18:28.045 20:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:18:28.045 20:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:28.303 20:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.MjBY9O4EsQ 00:18:28.303 20:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.MjBY9O4EsQ 00:18:28.303 20:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:28.562 [2024-07-15 20:44:02.865568] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:28.562 20:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:28.562 20:44:03 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:28.819 [2024-07-15 20:44:03.178359] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:28.819 [2024-07-15 20:44:03.178554] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:28.819 20:44:03 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:29.076 malloc0 00:18:29.076 20:44:03 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:29.076 20:44:03 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.MjBY9O4EsQ 00:18:29.334 [2024-07-15 20:44:03.703915] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:29.334 20:44:03 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.MjBY9O4EsQ 00:18:29.334 EAL: No free 2048 kB hugepages reported on node 1 00:18:41.536 Initializing NVMe Controllers 00:18:41.536 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:41.536 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:41.536 Initialization complete. Launching workers. 
00:18:41.536 ======================================================== 00:18:41.536 Latency(us) 00:18:41.536 Device Information : IOPS MiB/s Average min max 00:18:41.536 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16563.10 64.70 3864.33 851.35 7484.98 00:18:41.536 ======================================================== 00:18:41.536 Total : 16563.10 64.70 3864.33 851.35 7484.98 00:18:41.536 00:18:41.536 20:44:13 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MjBY9O4EsQ 00:18:41.536 20:44:13 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:41.536 20:44:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:41.536 20:44:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:41.536 20:44:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.MjBY9O4EsQ' 00:18:41.536 20:44:13 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:41.536 20:44:13 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2715054 00:18:41.536 20:44:13 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:41.536 20:44:13 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:41.536 20:44:13 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2715054 /var/tmp/bdevperf.sock 00:18:41.536 20:44:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2715054 ']' 00:18:41.536 20:44:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:41.536 20:44:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:41.536 20:44:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:41.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:41.536 20:44:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:41.536 20:44:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.536 [2024-07-15 20:44:13.851721] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:18:41.536 [2024-07-15 20:44:13.851768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2715054 ] 00:18:41.536 EAL: No free 2048 kB hugepages reported on node 1 00:18:41.536 [2024-07-15 20:44:13.900483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.536 [2024-07-15 20:44:13.972117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:41.536 20:44:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:41.536 20:44:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:41.536 20:44:14 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.MjBY9O4EsQ 00:18:41.536 [2024-07-15 20:44:14.819128] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:41.536 [2024-07-15 20:44:14.819199] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:41.536 TLSTESTn1 00:18:41.536 20:44:14 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:41.536 Running I/O for 10 seconds... 00:18:51.506 00:18:51.506 Latency(us) 00:18:51.506 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.506 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:51.506 Verification LBA range: start 0x0 length 0x2000 00:18:51.506 TLSTESTn1 : 10.02 5484.80 21.42 0.00 0.00 23299.85 7265.95 69753.10 00:18:51.506 =================================================================================================================== 00:18:51.506 Total : 5484.80 21.42 0.00 0.00 23299.85 7265.95 69753.10 00:18:51.506 0 00:18:51.506 20:44:25 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:51.506 20:44:25 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2715054 00:18:51.506 20:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2715054 ']' 00:18:51.506 20:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2715054 00:18:51.506 20:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:51.506 20:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:51.506 20:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2715054 00:18:51.506 20:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:51.506 20:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:51.506 20:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2715054' 00:18:51.506 killing process with pid 2715054 00:18:51.506 20:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2715054 00:18:51.506 Received shutdown signal, test time was about 10.000000 seconds 00:18:51.506 00:18:51.506 Latency(us) 00:18:51.506 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:18:51.506 =================================================================================================================== 00:18:51.506 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:51.506 [2024-07-15 20:44:25.101030] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:51.506 20:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2715054 00:18:51.506 20:44:25 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zavGk8KZe4 00:18:51.506 20:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:51.506 20:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zavGk8KZe4 00:18:51.506 20:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:51.506 20:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:51.506 20:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:51.506 20:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:51.506 20:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zavGk8KZe4 00:18:51.506 20:44:25 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:51.506 20:44:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:51.506 20:44:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:51.506 20:44:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.zavGk8KZe4' 00:18:51.506 20:44:25 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:51.506 20:44:25 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2716892 00:18:51.506 20:44:25 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:51.506 20:44:25 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:51.506 20:44:25 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2716892 /var/tmp/bdevperf.sock 00:18:51.506 20:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2716892 ']' 00:18:51.506 20:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:51.506 20:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:51.506 20:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:51.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:51.506 20:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:51.506 20:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.506 [2024-07-15 20:44:25.331274] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:18:51.506 [2024-07-15 20:44:25.331322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2716892 ] 00:18:51.506 EAL: No free 2048 kB hugepages reported on node 1 00:18:51.506 [2024-07-15 20:44:25.381205] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.506 [2024-07-15 20:44:25.448925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:51.765 20:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:51.765 20:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:51.765 20:44:26 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zavGk8KZe4 00:18:52.025 [2024-07-15 20:44:26.287304] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:52.025 [2024-07-15 20:44:26.287386] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:52.025 [2024-07-15 20:44:26.297384] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:52.025 [2024-07-15 20:44:26.297711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756570 (107): Transport endpoint is not connected 00:18:52.025 [2024-07-15 20:44:26.298702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1756570 (9): Bad file descriptor 00:18:52.025 [2024-07-15 20:44:26.299702] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:52.025 [2024-07-15 20:44:26.299714] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:52.025 [2024-07-15 20:44:26.299726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
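Both key files hold valid interchange-format keys, but the target registered only /tmp/tmp.MjBY9O4EsQ for host1, so dialing in with /tmp/tmp.zavGk8KZe4 leaves the TLS handshake incomplete and bdev_nvme_attach_controller surfaces the -5 below. target/tls.sh@146 wraps the attempt in the NOT helper so the expected failure counts as a pass; a simplified sketch of that inversion idiom, assuming a stripped-down NOT (the real helper in common/autotest_common.sh also special-cases exit codes above 128, which is what the es checks in this trace test for):

    # Succeed only when the wrapped command fails: the shape of the NOT() negative-test helper.
    NOT() {
        local es=0
        "$@" || es=$?
        ((es != 0))   # expected failure, so the helper itself returns success
    }

    # Attaching with a PSK the target never registered for this host must be rejected.
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zavGk8KZe4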
00:18:52.025 request: 00:18:52.025 { 00:18:52.025 "name": "TLSTEST", 00:18:52.025 "trtype": "tcp", 00:18:52.025 "traddr": "10.0.0.2", 00:18:52.025 "adrfam": "ipv4", 00:18:52.025 "trsvcid": "4420", 00:18:52.025 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.025 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:52.025 "prchk_reftag": false, 00:18:52.025 "prchk_guard": false, 00:18:52.025 "hdgst": false, 00:18:52.025 "ddgst": false, 00:18:52.025 "psk": "/tmp/tmp.zavGk8KZe4", 00:18:52.025 "method": "bdev_nvme_attach_controller", 00:18:52.025 "req_id": 1 00:18:52.025 } 00:18:52.025 Got JSON-RPC error response 00:18:52.025 response: 00:18:52.025 { 00:18:52.025 "code": -5, 00:18:52.025 "message": "Input/output error" 00:18:52.025 } 00:18:52.025 20:44:26 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2716892 00:18:52.025 20:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2716892 ']' 00:18:52.025 20:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2716892 00:18:52.025 20:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:52.025 20:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:52.025 20:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2716892 00:18:52.025 20:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:52.025 20:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:52.025 20:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2716892' 00:18:52.025 killing process with pid 2716892 00:18:52.025 20:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2716892 00:18:52.025 Received shutdown signal, test time was about 10.000000 seconds 00:18:52.025 00:18:52.025 Latency(us) 00:18:52.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.025 =================================================================================================================== 00:18:52.025 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:52.025 [2024-07-15 20:44:26.356431] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:52.025 20:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2716892 00:18:52.285 20:44:26 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:52.285 20:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:52.285 20:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:52.285 20:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:52.285 20:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:52.285 20:44:26 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.MjBY9O4EsQ 00:18:52.285 20:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:52.285 20:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.MjBY9O4EsQ 00:18:52.285 20:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:52.285 20:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:52.285 20:44:26 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:52.285 20:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:52.285 20:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.MjBY9O4EsQ 00:18:52.285 20:44:26 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:52.285 20:44:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:52.285 20:44:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:52.285 20:44:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.MjBY9O4EsQ' 00:18:52.285 20:44:26 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:52.285 20:44:26 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2717133 00:18:52.285 20:44:26 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:52.285 20:44:26 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:52.285 20:44:26 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2717133 /var/tmp/bdevperf.sock 00:18:52.285 20:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2717133 ']' 00:18:52.285 20:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:52.285 20:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:52.285 20:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:52.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:52.285 20:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:52.285 20:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.285 [2024-07-15 20:44:26.576223] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:18:52.285 [2024-07-15 20:44:26.576284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2717133 ] 00:18:52.285 EAL: No free 2048 kB hugepages reported on node 1 00:18:52.285 [2024-07-15 20:44:26.626298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.285 [2024-07-15 20:44:26.693588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:53.222 20:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:53.222 20:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:53.222 20:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.MjBY9O4EsQ 00:18:53.222 [2024-07-15 20:44:27.551955] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:53.222 [2024-07-15 20:44:27.552035] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:53.222 [2024-07-15 20:44:27.560501] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:53.222 [2024-07-15 20:44:27.560521] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:53.222 [2024-07-15 20:44:27.560544] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:53.222 [2024-07-15 20:44:27.561305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e1570 (107): Transport endpoint is not connected 00:18:53.222 [2024-07-15 20:44:27.562295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e1570 (9): Bad file descriptor 00:18:53.222 [2024-07-15 20:44:27.563295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:53.222 [2024-07-15 20:44:27.563306] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:53.222 [2024-07-15 20:44:27.563318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
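The tcp.c and posix.c errors above show what actually keys the server-side lookup: the TLS PSK identity binds the host NQN and subsystem NQN together, so a host2 handshake cannot reuse the PSK registered for host1 even though the key bytes are identical. A sketch of the identity string exactly as the failure messages print it (the NVMe0R01 prefix is copied verbatim from the log; its field encoding is defined by the NVMe/TCP transport spec):

    # PSK identity as printed in the lookup-failure messages above.
    hostnqn=nqn.2016-06.io.spdk:host2
    subnqn=nqn.2016-06.io.spdk:cnode1
    identity="NVMe0R01 ${hostnqn} ${subnqn}"
    echo "$identity"
    # -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
    # A key is found only for identities registered via nvmf_subsystem_add_host --psk,
    # which here means the (host1, cnode1) pair alone.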
00:18:53.222 request: 00:18:53.222 { 00:18:53.222 "name": "TLSTEST", 00:18:53.222 "trtype": "tcp", 00:18:53.222 "traddr": "10.0.0.2", 00:18:53.222 "adrfam": "ipv4", 00:18:53.222 "trsvcid": "4420", 00:18:53.222 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.222 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:53.222 "prchk_reftag": false, 00:18:53.222 "prchk_guard": false, 00:18:53.222 "hdgst": false, 00:18:53.222 "ddgst": false, 00:18:53.222 "psk": "/tmp/tmp.MjBY9O4EsQ", 00:18:53.222 "method": "bdev_nvme_attach_controller", 00:18:53.222 "req_id": 1 00:18:53.222 } 00:18:53.222 Got JSON-RPC error response 00:18:53.222 response: 00:18:53.222 { 00:18:53.222 "code": -5, 00:18:53.222 "message": "Input/output error" 00:18:53.222 } 00:18:53.222 20:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2717133 00:18:53.222 20:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2717133 ']' 00:18:53.222 20:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2717133 00:18:53.222 20:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:53.222 20:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:53.222 20:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2717133 00:18:53.222 20:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:53.222 20:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:53.222 20:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2717133' 00:18:53.222 killing process with pid 2717133 00:18:53.222 20:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2717133 00:18:53.222 Received shutdown signal, test time was about 10.000000 seconds 00:18:53.222 00:18:53.222 Latency(us) 00:18:53.222 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.222 =================================================================================================================== 00:18:53.222 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:53.222 [2024-07-15 20:44:27.624950] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:53.222 20:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2717133 00:18:53.482 20:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:53.482 20:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:53.482 20:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:53.482 20:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:53.482 20:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:53.482 20:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.MjBY9O4EsQ 00:18:53.482 20:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:53.482 20:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.MjBY9O4EsQ 00:18:53.482 20:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:53.482 20:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:53.482 20:44:27 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:53.482 20:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:53.482 20:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.MjBY9O4EsQ 00:18:53.482 20:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:53.482 20:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:53.482 20:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:53.482 20:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.MjBY9O4EsQ' 00:18:53.482 20:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:53.482 20:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2717371 00:18:53.482 20:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:53.482 20:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:53.482 20:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2717371 /var/tmp/bdevperf.sock 00:18:53.482 20:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2717371 ']' 00:18:53.482 20:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:53.482 20:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:53.482 20:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:53.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:53.482 20:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:53.482 20:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.482 [2024-07-15 20:44:27.847460] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:18:53.482 [2024-07-15 20:44:27.847509] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2717371 ] 00:18:53.482 EAL: No free 2048 kB hugepages reported on node 1 00:18:53.482 [2024-07-15 20:44:27.897278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.482 [2024-07-15 20:44:27.964614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:54.420 20:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:54.420 20:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:54.420 20:44:28 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.MjBY9O4EsQ 00:18:54.420 [2024-07-15 20:44:28.810247] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:54.420 [2024-07-15 20:44:28.810323] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:54.420 [2024-07-15 20:44:28.814797] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:54.420 [2024-07-15 20:44:28.814816] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:54.420 [2024-07-15 20:44:28.814837] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:54.420 [2024-07-15 20:44:28.815577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eac570 (107): Transport endpoint is not connected 00:18:54.420 [2024-07-15 20:44:28.816565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eac570 (9): Bad file descriptor 00:18:54.420 [2024-07-15 20:44:28.817566] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:18:54.420 [2024-07-15 20:44:28.817578] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:54.420 [2024-07-15 20:44:28.817590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
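This third case completes the negative matrix: the key and host NQN are the valid pair, but nqn.2016-06.io.spdk:cnode2 was never created, so the (host1, cnode2) identity has no PSK on the target either. The lone triple the target trusts is the one configured back at target/tls.sh@133, condensed below for reference; rpc.py stands in for the full scripts/rpc.py path used throughout this trace:

    # The single (subsystem, host, key) triple the target knows; any other pairing
    # of hostnqn and subnqn fails the PSK lookup during the TLS handshake.
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.MjBY9O4EsQ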
00:18:54.420 request: 00:18:54.420 { 00:18:54.420 "name": "TLSTEST", 00:18:54.420 "trtype": "tcp", 00:18:54.420 "traddr": "10.0.0.2", 00:18:54.420 "adrfam": "ipv4", 00:18:54.420 "trsvcid": "4420", 00:18:54.420 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:54.420 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:54.420 "prchk_reftag": false, 00:18:54.420 "prchk_guard": false, 00:18:54.420 "hdgst": false, 00:18:54.420 "ddgst": false, 00:18:54.420 "psk": "/tmp/tmp.MjBY9O4EsQ", 00:18:54.420 "method": "bdev_nvme_attach_controller", 00:18:54.420 "req_id": 1 00:18:54.420 } 00:18:54.420 Got JSON-RPC error response 00:18:54.420 response: 00:18:54.420 { 00:18:54.420 "code": -5, 00:18:54.420 "message": "Input/output error" 00:18:54.420 } 00:18:54.420 20:44:28 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2717371 00:18:54.420 20:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2717371 ']' 00:18:54.420 20:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2717371 00:18:54.420 20:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:54.420 20:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:54.420 20:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2717371 00:18:54.421 20:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:54.421 20:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:54.421 20:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2717371' 00:18:54.421 killing process with pid 2717371 00:18:54.421 20:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2717371 00:18:54.421 Received shutdown signal, test time was about 10.000000 seconds 00:18:54.421 00:18:54.421 Latency(us) 00:18:54.421 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.421 =================================================================================================================== 00:18:54.421 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:54.421 [2024-07-15 20:44:28.882611] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:54.421 20:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2717371 00:18:54.680 20:44:29 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:54.680 20:44:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:54.680 20:44:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:54.680 20:44:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:54.680 20:44:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:54.680 20:44:29 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:54.680 20:44:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:54.680 20:44:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:54.680 20:44:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:54.680 20:44:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:54.680 20:44:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:18:54.680 20:44:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:54.680 20:44:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:54.680 20:44:29 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:54.680 20:44:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:54.680 20:44:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:54.680 20:44:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:54.680 20:44:29 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:54.680 20:44:29 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2717603 00:18:54.680 20:44:29 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:54.680 20:44:29 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:54.680 20:44:29 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2717603 /var/tmp/bdevperf.sock 00:18:54.680 20:44:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2717603 ']' 00:18:54.680 20:44:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:54.680 20:44:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:54.680 20:44:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:54.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:54.680 20:44:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:54.680 20:44:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.680 [2024-07-15 20:44:29.104643] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:18:54.680 [2024-07-15 20:44:29.104690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2717603 ] 00:18:54.680 EAL: No free 2048 kB hugepages reported on node 1 00:18:54.680 [2024-07-15 20:44:29.154267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.939 [2024-07-15 20:44:29.221803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:55.506 20:44:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:55.506 20:44:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:55.506 20:44:29 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:55.807 [2024-07-15 20:44:30.066908] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:55.807 [2024-07-15 20:44:30.068637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca2af0 (9): Bad file descriptor 00:18:55.807 [2024-07-15 20:44:30.069635] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:55.807 [2024-07-15 20:44:30.069649] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:55.807 [2024-07-15 20:44:30.069662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
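Note: here the negative case is the absence of any PSK: the listener requires TLS, so the plain TCP attach dies at the socket layer before any NVMe traffic starts. The NOT/valid_exec_arg scaffolding visible in the trace inverts the exit status so an expected failure counts as a pass; its effect reduces to this sketch (not the harness's literal definition; the trace wraps the whole run_bdevperf helper, shown here applied to the attach directly for illustration):

  # NOT <cmd>: succeed only if <cmd> fails (expected-failure assertion).
  NOT() {
      if "$@"; then
          return 1   # command unexpectedly succeeded
      fi
      return 0
  }
  NOT rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1   # no --psk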
00:18:55.807 request: 00:18:55.807 { 00:18:55.807 "name": "TLSTEST", 00:18:55.807 "trtype": "tcp", 00:18:55.807 "traddr": "10.0.0.2", 00:18:55.807 "adrfam": "ipv4", 00:18:55.807 "trsvcid": "4420", 00:18:55.807 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.807 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:55.807 "prchk_reftag": false, 00:18:55.807 "prchk_guard": false, 00:18:55.807 "hdgst": false, 00:18:55.807 "ddgst": false, 00:18:55.807 "method": "bdev_nvme_attach_controller", 00:18:55.807 "req_id": 1 00:18:55.807 } 00:18:55.807 Got JSON-RPC error response 00:18:55.807 response: 00:18:55.807 { 00:18:55.807 "code": -5, 00:18:55.807 "message": "Input/output error" 00:18:55.807 } 00:18:55.807 20:44:30 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2717603 00:18:55.807 20:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2717603 ']' 00:18:55.807 20:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2717603 00:18:55.807 20:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:55.807 20:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:55.807 20:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2717603 00:18:55.807 20:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:55.807 20:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:55.807 20:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2717603' 00:18:55.807 killing process with pid 2717603 00:18:55.807 20:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2717603 00:18:55.807 Received shutdown signal, test time was about 10.000000 seconds 00:18:55.807 00:18:55.807 Latency(us) 00:18:55.807 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.807 =================================================================================================================== 00:18:55.807 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:55.807 20:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2717603 00:18:56.068 20:44:30 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:56.068 20:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:56.068 20:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:56.068 20:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:56.068 20:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:56.068 20:44:30 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 2712706 00:18:56.068 20:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2712706 ']' 00:18:56.068 20:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2712706 00:18:56.068 20:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:56.068 20:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:56.068 20:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2712706 00:18:56.068 20:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:56.068 20:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:56.068 20:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2712706' 00:18:56.068 
killing process with pid 2712706 00:18:56.068 20:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2712706 00:18:56.068 [2024-07-15 20:44:30.361445] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:56.068 20:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2712706 00:18:56.327 20:44:30 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:56.327 20:44:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:56.327 20:44:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:56.327 20:44:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:56.327 20:44:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:56.327 20:44:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:18:56.327 20:44:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:56.327 20:44:30 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:56.327 20:44:30 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:18:56.327 20:44:30 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.YP6M1gS6Yz 00:18:56.327 20:44:30 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:56.327 20:44:30 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.YP6M1gS6Yz 00:18:56.327 20:44:30 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:18:56.327 20:44:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:56.327 20:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:56.327 20:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.327 20:44:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2717861 00:18:56.327 20:44:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:56.327 20:44:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2717861 00:18:56.327 20:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2717861 ']' 00:18:56.327 20:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.327 20:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:56.327 20:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.327 20:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:56.327 20:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.327 [2024-07-15 20:44:30.664925] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
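Note: format_interchange_psk above wraps the 48 hex characters in the NVMe TLS PSK interchange framing: the NVMeTLSkey-1 prefix, a two-digit hash indicator (01 = SHA-256, 02 = SHA-384), and base64 over the key bytes plus an appended CRC-32. The MDAx... prefix of the key printed above decodes back to the literal ASCII string 0011..., i.e. the helper treats its argument as raw bytes rather than hex-decoding it. The inline python in the trace boils down to roughly this (CRC byte order assumed little-endian, which reproduces the wWXNJw== tail seen above):

  python3 - <<'EOF'
  import base64, zlib
  key = b"00112233445566778899aabbccddeeff0011223344556677"  # used literally
  crc = zlib.crc32(key).to_bytes(4, byteorder="little")      # 4-byte CRC-32 tail
  print("NVMeTLSkey-1:02:" + base64.b64encode(key + crc).decode() + ":")
  EOF
  chmod 0600 /tmp/tmp.YP6M1gS6Yz   # PSK files must be owner-only (exercised below)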
00:18:56.327 [2024-07-15 20:44:30.664979] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:56.327 EAL: No free 2048 kB hugepages reported on node 1 00:18:56.327 [2024-07-15 20:44:30.719154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.327 [2024-07-15 20:44:30.789982] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:56.327 [2024-07-15 20:44:30.790021] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:56.327 [2024-07-15 20:44:30.790028] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:56.327 [2024-07-15 20:44:30.790034] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:56.327 [2024-07-15 20:44:30.790039] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:56.327 [2024-07-15 20:44:30.790073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:57.264 20:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:57.264 20:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:57.264 20:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:57.264 20:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:57.264 20:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.264 20:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:57.264 20:44:31 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.YP6M1gS6Yz 00:18:57.264 20:44:31 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.YP6M1gS6Yz 00:18:57.264 20:44:31 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:57.264 [2024-07-15 20:44:31.653308] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:57.264 20:44:31 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:57.523 20:44:31 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:57.523 [2024-07-15 20:44:31.986151] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:57.523 [2024-07-15 20:44:31.986355] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:57.523 20:44:31 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:57.782 malloc0 00:18:57.782 20:44:32 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:58.041 20:44:32 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.YP6M1gS6Yz 00:18:58.041 [2024-07-15 20:44:32.491717] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:58.041 20:44:32 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YP6M1gS6Yz 00:18:58.041 20:44:32 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:58.041 20:44:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:58.041 20:44:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:58.041 20:44:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.YP6M1gS6Yz' 00:18:58.041 20:44:32 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:58.041 20:44:32 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:58.041 20:44:32 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2718120 00:18:58.041 20:44:32 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:58.041 20:44:32 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2718120 /var/tmp/bdevperf.sock 00:18:58.041 20:44:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2718120 ']' 00:18:58.041 20:44:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:58.041 20:44:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:58.041 20:44:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:58.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:58.041 20:44:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:58.041 20:44:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.299 [2024-07-15 20:44:32.535382] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
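Note: strung together, the target-side sequence above is the whole TLS enablement story: -k is what makes the 4420 listener a secure channel, and add_host binds the PSK to exactly one host NQN. Condensed, with the rpc.py path abbreviated from the trace:

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -k             # -k: require TLS on this listener
  rpc.py bdev_malloc_create 32 4096 -b malloc0  # 32 MiB backing namespace
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YP6M1gS6Yz

With both ends holding /tmp/tmp.YP6M1gS6Yz, the bdevperf attach below succeeds and the 10-second verify workload runs to completion.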
00:18:58.299 [2024-07-15 20:44:32.535432] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2718120 ] 00:18:58.299 EAL: No free 2048 kB hugepages reported on node 1 00:18:58.299 [2024-07-15 20:44:32.583821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.299 [2024-07-15 20:44:32.655257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:58.299 20:44:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:58.299 20:44:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:58.299 20:44:32 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YP6M1gS6Yz 00:18:58.558 [2024-07-15 20:44:32.912216] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:58.558 [2024-07-15 20:44:32.912291] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:58.558 TLSTESTn1 00:18:58.558 20:44:33 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:58.815 Running I/O for 10 seconds... 00:19:08.782 00:19:08.782 Latency(us) 00:19:08.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:08.782 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:08.782 Verification LBA range: start 0x0 length 0x2000 00:19:08.782 TLSTESTn1 : 10.01 5407.99 21.12 0.00 0.00 23628.87 6069.20 48781.58 00:19:08.782 =================================================================================================================== 00:19:08.782 Total : 5407.99 21.12 0.00 0.00 23628.87 6069.20 48781.58 00:19:08.782 0 00:19:08.782 20:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:08.782 20:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2718120 00:19:08.782 20:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2718120 ']' 00:19:08.782 20:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2718120 00:19:08.782 20:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:08.782 20:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:08.782 20:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2718120 00:19:08.782 20:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:08.782 20:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:08.782 20:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2718120' 00:19:08.782 killing process with pid 2718120 00:19:08.782 20:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2718120 00:19:08.782 Received shutdown signal, test time was about 10.000000 seconds 00:19:08.782 00:19:08.782 Latency(us) 00:19:08.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:19:08.782 =================================================================================================================== 00:19:08.782 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:08.782 [2024-07-15 20:44:43.196382] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:08.782 20:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2718120 00:19:09.041 20:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.YP6M1gS6Yz 00:19:09.041 20:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YP6M1gS6Yz 00:19:09.041 20:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:09.042 20:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YP6M1gS6Yz 00:19:09.042 20:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:09.042 20:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:09.042 20:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:09.042 20:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:09.042 20:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YP6M1gS6Yz 00:19:09.042 20:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:09.042 20:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:09.042 20:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:09.042 20:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.YP6M1gS6Yz' 00:19:09.042 20:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:09.042 20:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2719947 00:19:09.042 20:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:09.042 20:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:09.042 20:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2719947 /var/tmp/bdevperf.sock 00:19:09.042 20:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2719947 ']' 00:19:09.042 20:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:09.042 20:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:09.042 20:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:09.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:09.042 20:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:09.042 20:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:09.042 [2024-07-15 20:44:43.428604] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
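Note: the chmod 0666 above deliberately loosens the key file so the next attach can show the initiator refusing world-accessible PSKs (the bdev_nvme_load_psk "Incorrect permissions" error that follows). The gate amounts to a mode check along these lines; the real check lives in C inside bdev_nvme, so this is only a shell-level sketch:

  psk=/tmp/tmp.YP6M1gS6Yz
  mode=$(stat -c '%a' "$psk")
  if (( 8#$mode & 8#077 )); then                 # any group/other bit set?
      echo "refusing $psk: mode $mode is not owner-only" >&2
      exit 1
  fi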
00:19:09.042 [2024-07-15 20:44:43.428652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2719947 ] 00:19:09.042 EAL: No free 2048 kB hugepages reported on node 1 00:19:09.042 [2024-07-15 20:44:43.478846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.304 [2024-07-15 20:44:43.556559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:09.875 20:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:09.875 20:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:09.875 20:44:44 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YP6M1gS6Yz 00:19:10.134 [2024-07-15 20:44:44.391475] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:10.134 [2024-07-15 20:44:44.391522] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:10.134 [2024-07-15 20:44:44.391531] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.YP6M1gS6Yz 00:19:10.134 request: 00:19:10.134 { 00:19:10.134 "name": "TLSTEST", 00:19:10.134 "trtype": "tcp", 00:19:10.134 "traddr": "10.0.0.2", 00:19:10.134 "adrfam": "ipv4", 00:19:10.134 "trsvcid": "4420", 00:19:10.134 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:10.134 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:10.134 "prchk_reftag": false, 00:19:10.134 "prchk_guard": false, 00:19:10.134 "hdgst": false, 00:19:10.134 "ddgst": false, 00:19:10.134 "psk": "/tmp/tmp.YP6M1gS6Yz", 00:19:10.134 "method": "bdev_nvme_attach_controller", 00:19:10.134 "req_id": 1 00:19:10.134 } 00:19:10.134 Got JSON-RPC error response 00:19:10.134 response: 00:19:10.134 { 00:19:10.134 "code": -1, 00:19:10.134 "message": "Operation not permitted" 00:19:10.134 } 00:19:10.134 20:44:44 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2719947 00:19:10.134 20:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2719947 ']' 00:19:10.134 20:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2719947 00:19:10.134 20:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:10.134 20:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:10.134 20:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2719947 00:19:10.134 20:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:10.134 20:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:10.134 20:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2719947' 00:19:10.134 killing process with pid 2719947 00:19:10.134 20:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2719947 00:19:10.134 Received shutdown signal, test time was about 10.000000 seconds 00:19:10.134 00:19:10.134 Latency(us) 00:19:10.134 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.134 
=================================================================================================================== 00:19:10.134 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:10.134 20:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2719947 00:19:10.393 20:44:44 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:10.393 20:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:10.393 20:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:10.393 20:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:10.393 20:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:10.393 20:44:44 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 2717861 00:19:10.393 20:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2717861 ']' 00:19:10.393 20:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2717861 00:19:10.393 20:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:10.393 20:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:10.393 20:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2717861 00:19:10.393 20:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:10.393 20:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:10.393 20:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2717861' 00:19:10.393 killing process with pid 2717861 00:19:10.393 20:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2717861 00:19:10.393 [2024-07-15 20:44:44.667697] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:10.393 20:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2717861 00:19:10.393 20:44:44 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:19:10.393 20:44:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:10.393 20:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:10.393 20:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:10.393 20:44:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2720199 00:19:10.393 20:44:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2720199 00:19:10.393 20:44:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:10.393 20:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2720199 ']' 00:19:10.393 20:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.393 20:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:10.393 20:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
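Note: the target is relaunched here for the server-side half of the permission test. For reference, the recurring nvmf_tgt flags decode as follows (each is confirmed by a notice elsewhere in this trace):

  #   -i 0       shm instance id (hence --file-prefix=spdk0 in the EAL args)
  #   -e 0xFFFF  tracepoint group mask (the 'Tracepoint Group Mask' notice)
  #   -m 0x2     core mask, i.e. a single reactor pinned to core 1
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2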
00:19:10.393 20:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:10.393 20:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:10.651 [2024-07-15 20:44:44.909523] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:19:10.651 [2024-07-15 20:44:44.909572] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:10.651 EAL: No free 2048 kB hugepages reported on node 1 00:19:10.651 [2024-07-15 20:44:44.963928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.652 [2024-07-15 20:44:45.041633] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:10.652 [2024-07-15 20:44:45.041667] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:10.652 [2024-07-15 20:44:45.041674] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:10.652 [2024-07-15 20:44:45.041680] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:10.652 [2024-07-15 20:44:45.041684] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:10.652 [2024-07-15 20:44:45.041716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.588 20:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:11.588 20:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:11.588 20:44:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:11.588 20:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:11.588 20:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:11.588 20:44:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:11.588 20:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.YP6M1gS6Yz 00:19:11.588 20:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:11.588 20:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.YP6M1gS6Yz 00:19:11.588 20:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:19:11.588 20:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:11.588 20:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:19:11.588 20:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:11.588 20:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.YP6M1gS6Yz 00:19:11.588 20:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.YP6M1gS6Yz 00:19:11.588 20:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:11.588 [2024-07-15 20:44:45.907709] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:11.588 20:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:11.847 
20:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:11.847 [2024-07-15 20:44:46.252588] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:11.847 [2024-07-15 20:44:46.252795] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:11.847 20:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:12.106 malloc0 00:19:12.106 20:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:12.365 20:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YP6M1gS6Yz 00:19:12.365 [2024-07-15 20:44:46.778008] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:12.365 [2024-07-15 20:44:46.778032] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:19:12.365 [2024-07-15 20:44:46.778053] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:12.365 request: 00:19:12.365 { 00:19:12.365 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:12.365 "host": "nqn.2016-06.io.spdk:host1", 00:19:12.365 "psk": "/tmp/tmp.YP6M1gS6Yz", 00:19:12.365 "method": "nvmf_subsystem_add_host", 00:19:12.365 "req_id": 1 00:19:12.365 } 00:19:12.365 Got JSON-RPC error response 00:19:12.365 response: 00:19:12.365 { 00:19:12.365 "code": -32603, 00:19:12.365 "message": "Internal error" 00:19:12.365 } 00:19:12.365 20:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:12.365 20:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:12.365 20:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:12.365 20:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:12.365 20:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 2720199 00:19:12.365 20:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2720199 ']' 00:19:12.365 20:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2720199 00:19:12.365 20:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:12.365 20:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:12.365 20:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2720199 00:19:12.365 20:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:12.365 20:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:12.365 20:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2720199' 00:19:12.365 killing process with pid 2720199 00:19:12.365 20:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2720199 00:19:12.365 20:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2720199 00:19:12.624 20:44:47 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.YP6M1gS6Yz 00:19:12.624 20:44:47 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:19:12.624 
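Note: the target enforces the same permission rule independently (tcp_load_psk above), so the 0666 key is rejected on both ends, just with different JSON-RPC codes. As observed in this section:

  #   -5      Input/output error       initiator: TLS connect failed (wrong or missing PSK)
  #   -1      Operation not permitted  initiator: refused to load the 0666 PSK file
  #   -32603  Internal error           target: add_host could not read the 0666 PSK file
  chmod 0600 /tmp/tmp.YP6M1gS6Yz   # restore owner-only; the rerun below succeeds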
20:44:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:12.624 20:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:12.624 20:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.624 20:44:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2720484 00:19:12.624 20:44:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2720484 00:19:12.624 20:44:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:12.624 20:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2720484 ']' 00:19:12.624 20:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.624 20:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:12.624 20:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:12.624 20:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:12.624 20:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.624 [2024-07-15 20:44:47.090407] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:19:12.624 [2024-07-15 20:44:47.090454] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:12.882 EAL: No free 2048 kB hugepages reported on node 1 00:19:12.882 [2024-07-15 20:44:47.145593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.882 [2024-07-15 20:44:47.224056] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:12.882 [2024-07-15 20:44:47.224093] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:12.882 [2024-07-15 20:44:47.224100] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:12.882 [2024-07-15 20:44:47.224110] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:12.882 [2024-07-15 20:44:47.224114] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
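Note: the app_setup_trace notices repeated at each app start are actionable: either command below (taken from the notices themselves) captures the tracepoint data enabled by -e 0xFFFF:

  spdk_trace -s nvmf -i 0                      # live snapshot of events at runtime
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0   # or keep the shm file for offline analysis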
00:19:12.882 [2024-07-15 20:44:47.224130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:13.449 20:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:13.449 20:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:13.449 20:44:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:13.449 20:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:13.449 20:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.449 20:44:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:13.449 20:44:47 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.YP6M1gS6Yz 00:19:13.449 20:44:47 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.YP6M1gS6Yz 00:19:13.449 20:44:47 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:13.707 [2024-07-15 20:44:48.075629] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:13.707 20:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:13.966 20:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:13.966 [2024-07-15 20:44:48.424527] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:13.966 [2024-07-15 20:44:48.424718] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:13.966 20:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:14.224 malloc0 00:19:14.224 20:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:14.482 20:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YP6M1gS6Yz 00:19:14.482 [2024-07-15 20:44:48.930019] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:14.482 20:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2720942 00:19:14.482 20:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:14.482 20:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:14.482 20:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2720942 /var/tmp/bdevperf.sock 00:19:14.482 20:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2720942 ']' 00:19:14.482 20:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:14.482 20:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:14.482 20:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:14.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:14.482 20:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:14.482 20:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.741 [2024-07-15 20:44:48.989318] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:19:14.741 [2024-07-15 20:44:48.989366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2720942 ] 00:19:14.741 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.741 [2024-07-15 20:44:49.039855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.741 [2024-07-15 20:44:49.117919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:15.308 20:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:15.308 20:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:15.308 20:44:49 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YP6M1gS6Yz 00:19:15.567 [2024-07-15 20:44:49.948844] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:15.567 [2024-07-15 20:44:49.948923] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:15.567 TLSTESTn1 00:19:15.567 20:44:50 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:15.825 20:44:50 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:19:15.825 "subsystems": [ 00:19:15.825 { 00:19:15.825 "subsystem": "keyring", 00:19:15.825 "config": [] 00:19:15.825 }, 00:19:15.825 { 00:19:15.825 "subsystem": "iobuf", 00:19:15.825 "config": [ 00:19:15.825 { 00:19:15.825 "method": "iobuf_set_options", 00:19:15.825 "params": { 00:19:15.825 "small_pool_count": 8192, 00:19:15.825 "large_pool_count": 1024, 00:19:15.825 "small_bufsize": 8192, 00:19:15.825 "large_bufsize": 135168 00:19:15.825 } 00:19:15.825 } 00:19:15.825 ] 00:19:15.825 }, 00:19:15.825 { 00:19:15.825 "subsystem": "sock", 00:19:15.825 "config": [ 00:19:15.825 { 00:19:15.825 "method": "sock_set_default_impl", 00:19:15.825 "params": { 00:19:15.825 "impl_name": "posix" 00:19:15.825 } 00:19:15.825 }, 00:19:15.825 { 00:19:15.825 "method": "sock_impl_set_options", 00:19:15.825 "params": { 00:19:15.825 "impl_name": "ssl", 00:19:15.825 "recv_buf_size": 4096, 00:19:15.825 "send_buf_size": 4096, 00:19:15.825 "enable_recv_pipe": true, 00:19:15.825 "enable_quickack": false, 00:19:15.825 "enable_placement_id": 0, 00:19:15.825 "enable_zerocopy_send_server": true, 00:19:15.825 "enable_zerocopy_send_client": false, 00:19:15.825 "zerocopy_threshold": 0, 00:19:15.825 "tls_version": 0, 00:19:15.825 "enable_ktls": false 00:19:15.825 } 00:19:15.825 }, 00:19:15.825 { 00:19:15.825 "method": "sock_impl_set_options", 00:19:15.825 "params": { 00:19:15.825 "impl_name": "posix", 00:19:15.825 "recv_buf_size": 2097152, 00:19:15.825 
"send_buf_size": 2097152, 00:19:15.825 "enable_recv_pipe": true, 00:19:15.825 "enable_quickack": false, 00:19:15.825 "enable_placement_id": 0, 00:19:15.825 "enable_zerocopy_send_server": true, 00:19:15.825 "enable_zerocopy_send_client": false, 00:19:15.825 "zerocopy_threshold": 0, 00:19:15.825 "tls_version": 0, 00:19:15.825 "enable_ktls": false 00:19:15.825 } 00:19:15.825 } 00:19:15.825 ] 00:19:15.825 }, 00:19:15.825 { 00:19:15.825 "subsystem": "vmd", 00:19:15.825 "config": [] 00:19:15.825 }, 00:19:15.825 { 00:19:15.825 "subsystem": "accel", 00:19:15.825 "config": [ 00:19:15.825 { 00:19:15.825 "method": "accel_set_options", 00:19:15.825 "params": { 00:19:15.825 "small_cache_size": 128, 00:19:15.825 "large_cache_size": 16, 00:19:15.825 "task_count": 2048, 00:19:15.825 "sequence_count": 2048, 00:19:15.825 "buf_count": 2048 00:19:15.825 } 00:19:15.825 } 00:19:15.825 ] 00:19:15.825 }, 00:19:15.825 { 00:19:15.825 "subsystem": "bdev", 00:19:15.825 "config": [ 00:19:15.825 { 00:19:15.825 "method": "bdev_set_options", 00:19:15.825 "params": { 00:19:15.825 "bdev_io_pool_size": 65535, 00:19:15.825 "bdev_io_cache_size": 256, 00:19:15.825 "bdev_auto_examine": true, 00:19:15.825 "iobuf_small_cache_size": 128, 00:19:15.825 "iobuf_large_cache_size": 16 00:19:15.825 } 00:19:15.825 }, 00:19:15.825 { 00:19:15.825 "method": "bdev_raid_set_options", 00:19:15.825 "params": { 00:19:15.825 "process_window_size_kb": 1024 00:19:15.825 } 00:19:15.825 }, 00:19:15.825 { 00:19:15.825 "method": "bdev_iscsi_set_options", 00:19:15.825 "params": { 00:19:15.825 "timeout_sec": 30 00:19:15.825 } 00:19:15.825 }, 00:19:15.825 { 00:19:15.825 "method": "bdev_nvme_set_options", 00:19:15.825 "params": { 00:19:15.825 "action_on_timeout": "none", 00:19:15.825 "timeout_us": 0, 00:19:15.825 "timeout_admin_us": 0, 00:19:15.825 "keep_alive_timeout_ms": 10000, 00:19:15.825 "arbitration_burst": 0, 00:19:15.825 "low_priority_weight": 0, 00:19:15.825 "medium_priority_weight": 0, 00:19:15.825 "high_priority_weight": 0, 00:19:15.825 "nvme_adminq_poll_period_us": 10000, 00:19:15.825 "nvme_ioq_poll_period_us": 0, 00:19:15.825 "io_queue_requests": 0, 00:19:15.825 "delay_cmd_submit": true, 00:19:15.825 "transport_retry_count": 4, 00:19:15.825 "bdev_retry_count": 3, 00:19:15.825 "transport_ack_timeout": 0, 00:19:15.825 "ctrlr_loss_timeout_sec": 0, 00:19:15.825 "reconnect_delay_sec": 0, 00:19:15.825 "fast_io_fail_timeout_sec": 0, 00:19:15.825 "disable_auto_failback": false, 00:19:15.825 "generate_uuids": false, 00:19:15.825 "transport_tos": 0, 00:19:15.825 "nvme_error_stat": false, 00:19:15.825 "rdma_srq_size": 0, 00:19:15.825 "io_path_stat": false, 00:19:15.825 "allow_accel_sequence": false, 00:19:15.825 "rdma_max_cq_size": 0, 00:19:15.825 "rdma_cm_event_timeout_ms": 0, 00:19:15.825 "dhchap_digests": [ 00:19:15.825 "sha256", 00:19:15.825 "sha384", 00:19:15.825 "sha512" 00:19:15.825 ], 00:19:15.825 "dhchap_dhgroups": [ 00:19:15.825 "null", 00:19:15.825 "ffdhe2048", 00:19:15.825 "ffdhe3072", 00:19:15.825 "ffdhe4096", 00:19:15.825 "ffdhe6144", 00:19:15.825 "ffdhe8192" 00:19:15.825 ] 00:19:15.825 } 00:19:15.825 }, 00:19:15.825 { 00:19:15.825 "method": "bdev_nvme_set_hotplug", 00:19:15.825 "params": { 00:19:15.825 "period_us": 100000, 00:19:15.825 "enable": false 00:19:15.825 } 00:19:15.825 }, 00:19:15.825 { 00:19:15.825 "method": "bdev_malloc_create", 00:19:15.825 "params": { 00:19:15.825 "name": "malloc0", 00:19:15.825 "num_blocks": 8192, 00:19:15.825 "block_size": 4096, 00:19:15.825 "physical_block_size": 4096, 00:19:15.825 "uuid": 
"720050ba-cc06-4166-af5e-1f97ef3b8f7f", 00:19:15.825 "optimal_io_boundary": 0 00:19:15.825 } 00:19:15.825 }, 00:19:15.825 { 00:19:15.825 "method": "bdev_wait_for_examine" 00:19:15.825 } 00:19:15.825 ] 00:19:15.825 }, 00:19:15.825 { 00:19:15.825 "subsystem": "nbd", 00:19:15.825 "config": [] 00:19:15.825 }, 00:19:15.825 { 00:19:15.825 "subsystem": "scheduler", 00:19:15.825 "config": [ 00:19:15.825 { 00:19:15.825 "method": "framework_set_scheduler", 00:19:15.825 "params": { 00:19:15.825 "name": "static" 00:19:15.825 } 00:19:15.825 } 00:19:15.825 ] 00:19:15.825 }, 00:19:15.825 { 00:19:15.825 "subsystem": "nvmf", 00:19:15.825 "config": [ 00:19:15.825 { 00:19:15.825 "method": "nvmf_set_config", 00:19:15.825 "params": { 00:19:15.825 "discovery_filter": "match_any", 00:19:15.825 "admin_cmd_passthru": { 00:19:15.825 "identify_ctrlr": false 00:19:15.825 } 00:19:15.825 } 00:19:15.825 }, 00:19:15.825 { 00:19:15.825 "method": "nvmf_set_max_subsystems", 00:19:15.825 "params": { 00:19:15.825 "max_subsystems": 1024 00:19:15.825 } 00:19:15.825 }, 00:19:15.825 { 00:19:15.825 "method": "nvmf_set_crdt", 00:19:15.825 "params": { 00:19:15.825 "crdt1": 0, 00:19:15.825 "crdt2": 0, 00:19:15.825 "crdt3": 0 00:19:15.825 } 00:19:15.825 }, 00:19:15.825 { 00:19:15.825 "method": "nvmf_create_transport", 00:19:15.825 "params": { 00:19:15.825 "trtype": "TCP", 00:19:15.825 "max_queue_depth": 128, 00:19:15.825 "max_io_qpairs_per_ctrlr": 127, 00:19:15.825 "in_capsule_data_size": 4096, 00:19:15.825 "max_io_size": 131072, 00:19:15.825 "io_unit_size": 131072, 00:19:15.825 "max_aq_depth": 128, 00:19:15.825 "num_shared_buffers": 511, 00:19:15.825 "buf_cache_size": 4294967295, 00:19:15.825 "dif_insert_or_strip": false, 00:19:15.825 "zcopy": false, 00:19:15.825 "c2h_success": false, 00:19:15.825 "sock_priority": 0, 00:19:15.825 "abort_timeout_sec": 1, 00:19:15.825 "ack_timeout": 0, 00:19:15.825 "data_wr_pool_size": 0 00:19:15.825 } 00:19:15.825 }, 00:19:15.825 { 00:19:15.825 "method": "nvmf_create_subsystem", 00:19:15.825 "params": { 00:19:15.825 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.825 "allow_any_host": false, 00:19:15.825 "serial_number": "SPDK00000000000001", 00:19:15.825 "model_number": "SPDK bdev Controller", 00:19:15.825 "max_namespaces": 10, 00:19:15.825 "min_cntlid": 1, 00:19:15.825 "max_cntlid": 65519, 00:19:15.825 "ana_reporting": false 00:19:15.825 } 00:19:15.825 }, 00:19:15.826 { 00:19:15.826 "method": "nvmf_subsystem_add_host", 00:19:15.826 "params": { 00:19:15.826 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.826 "host": "nqn.2016-06.io.spdk:host1", 00:19:15.826 "psk": "/tmp/tmp.YP6M1gS6Yz" 00:19:15.826 } 00:19:15.826 }, 00:19:15.826 { 00:19:15.826 "method": "nvmf_subsystem_add_ns", 00:19:15.826 "params": { 00:19:15.826 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.826 "namespace": { 00:19:15.826 "nsid": 1, 00:19:15.826 "bdev_name": "malloc0", 00:19:15.826 "nguid": "720050BACC064166AF5E1F97EF3B8F7F", 00:19:15.826 "uuid": "720050ba-cc06-4166-af5e-1f97ef3b8f7f", 00:19:15.826 "no_auto_visible": false 00:19:15.826 } 00:19:15.826 } 00:19:15.826 }, 00:19:15.826 { 00:19:15.826 "method": "nvmf_subsystem_add_listener", 00:19:15.826 "params": { 00:19:15.826 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.826 "listen_address": { 00:19:15.826 "trtype": "TCP", 00:19:15.826 "adrfam": "IPv4", 00:19:15.826 "traddr": "10.0.0.2", 00:19:15.826 "trsvcid": "4420" 00:19:15.826 }, 00:19:15.826 "secure_channel": true 00:19:15.826 } 00:19:15.826 } 00:19:15.826 ] 00:19:15.826 } 00:19:15.826 ] 00:19:15.826 }' 00:19:15.826 20:44:50 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:16.083 20:44:50 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:19:16.083 "subsystems": [ 00:19:16.083 { 00:19:16.083 "subsystem": "keyring", 00:19:16.083 "config": [] 00:19:16.083 }, 00:19:16.083 { 00:19:16.083 "subsystem": "iobuf", 00:19:16.083 "config": [ 00:19:16.083 { 00:19:16.083 "method": "iobuf_set_options", 00:19:16.083 "params": { 00:19:16.083 "small_pool_count": 8192, 00:19:16.083 "large_pool_count": 1024, 00:19:16.083 "small_bufsize": 8192, 00:19:16.083 "large_bufsize": 135168 00:19:16.083 } 00:19:16.083 } 00:19:16.083 ] 00:19:16.083 }, 00:19:16.083 { 00:19:16.084 "subsystem": "sock", 00:19:16.084 "config": [ 00:19:16.084 { 00:19:16.084 "method": "sock_set_default_impl", 00:19:16.084 "params": { 00:19:16.084 "impl_name": "posix" 00:19:16.084 } 00:19:16.084 }, 00:19:16.084 { 00:19:16.084 "method": "sock_impl_set_options", 00:19:16.084 "params": { 00:19:16.084 "impl_name": "ssl", 00:19:16.084 "recv_buf_size": 4096, 00:19:16.084 "send_buf_size": 4096, 00:19:16.084 "enable_recv_pipe": true, 00:19:16.084 "enable_quickack": false, 00:19:16.084 "enable_placement_id": 0, 00:19:16.084 "enable_zerocopy_send_server": true, 00:19:16.084 "enable_zerocopy_send_client": false, 00:19:16.084 "zerocopy_threshold": 0, 00:19:16.084 "tls_version": 0, 00:19:16.084 "enable_ktls": false 00:19:16.084 } 00:19:16.084 }, 00:19:16.084 { 00:19:16.084 "method": "sock_impl_set_options", 00:19:16.084 "params": { 00:19:16.084 "impl_name": "posix", 00:19:16.084 "recv_buf_size": 2097152, 00:19:16.084 "send_buf_size": 2097152, 00:19:16.084 "enable_recv_pipe": true, 00:19:16.084 "enable_quickack": false, 00:19:16.084 "enable_placement_id": 0, 00:19:16.084 "enable_zerocopy_send_server": true, 00:19:16.084 "enable_zerocopy_send_client": false, 00:19:16.084 "zerocopy_threshold": 0, 00:19:16.084 "tls_version": 0, 00:19:16.084 "enable_ktls": false 00:19:16.084 } 00:19:16.084 } 00:19:16.084 ] 00:19:16.084 }, 00:19:16.084 { 00:19:16.084 "subsystem": "vmd", 00:19:16.084 "config": [] 00:19:16.084 }, 00:19:16.084 { 00:19:16.084 "subsystem": "accel", 00:19:16.084 "config": [ 00:19:16.084 { 00:19:16.084 "method": "accel_set_options", 00:19:16.084 "params": { 00:19:16.084 "small_cache_size": 128, 00:19:16.084 "large_cache_size": 16, 00:19:16.084 "task_count": 2048, 00:19:16.084 "sequence_count": 2048, 00:19:16.084 "buf_count": 2048 00:19:16.084 } 00:19:16.084 } 00:19:16.084 ] 00:19:16.084 }, 00:19:16.084 { 00:19:16.084 "subsystem": "bdev", 00:19:16.084 "config": [ 00:19:16.084 { 00:19:16.084 "method": "bdev_set_options", 00:19:16.084 "params": { 00:19:16.084 "bdev_io_pool_size": 65535, 00:19:16.084 "bdev_io_cache_size": 256, 00:19:16.084 "bdev_auto_examine": true, 00:19:16.084 "iobuf_small_cache_size": 128, 00:19:16.084 "iobuf_large_cache_size": 16 00:19:16.084 } 00:19:16.084 }, 00:19:16.084 { 00:19:16.084 "method": "bdev_raid_set_options", 00:19:16.084 "params": { 00:19:16.084 "process_window_size_kb": 1024 00:19:16.084 } 00:19:16.084 }, 00:19:16.084 { 00:19:16.084 "method": "bdev_iscsi_set_options", 00:19:16.084 "params": { 00:19:16.084 "timeout_sec": 30 00:19:16.084 } 00:19:16.084 }, 00:19:16.084 { 00:19:16.084 "method": "bdev_nvme_set_options", 00:19:16.084 "params": { 00:19:16.084 "action_on_timeout": "none", 00:19:16.084 "timeout_us": 0, 00:19:16.084 "timeout_admin_us": 0, 00:19:16.084 "keep_alive_timeout_ms": 10000, 00:19:16.084 "arbitration_burst": 0, 
00:19:16.084 "low_priority_weight": 0, 00:19:16.084 "medium_priority_weight": 0, 00:19:16.084 "high_priority_weight": 0, 00:19:16.084 "nvme_adminq_poll_period_us": 10000, 00:19:16.084 "nvme_ioq_poll_period_us": 0, 00:19:16.084 "io_queue_requests": 512, 00:19:16.084 "delay_cmd_submit": true, 00:19:16.084 "transport_retry_count": 4, 00:19:16.084 "bdev_retry_count": 3, 00:19:16.084 "transport_ack_timeout": 0, 00:19:16.084 "ctrlr_loss_timeout_sec": 0, 00:19:16.084 "reconnect_delay_sec": 0, 00:19:16.084 "fast_io_fail_timeout_sec": 0, 00:19:16.084 "disable_auto_failback": false, 00:19:16.084 "generate_uuids": false, 00:19:16.084 "transport_tos": 0, 00:19:16.084 "nvme_error_stat": false, 00:19:16.084 "rdma_srq_size": 0, 00:19:16.084 "io_path_stat": false, 00:19:16.084 "allow_accel_sequence": false, 00:19:16.084 "rdma_max_cq_size": 0, 00:19:16.084 "rdma_cm_event_timeout_ms": 0, 00:19:16.084 "dhchap_digests": [ 00:19:16.084 "sha256", 00:19:16.084 "sha384", 00:19:16.084 "sha512" 00:19:16.084 ], 00:19:16.084 "dhchap_dhgroups": [ 00:19:16.084 "null", 00:19:16.084 "ffdhe2048", 00:19:16.084 "ffdhe3072", 00:19:16.084 "ffdhe4096", 00:19:16.084 "ffdhe6144", 00:19:16.084 "ffdhe8192" 00:19:16.084 ] 00:19:16.084 } 00:19:16.084 }, 00:19:16.084 { 00:19:16.084 "method": "bdev_nvme_attach_controller", 00:19:16.084 "params": { 00:19:16.084 "name": "TLSTEST", 00:19:16.084 "trtype": "TCP", 00:19:16.084 "adrfam": "IPv4", 00:19:16.084 "traddr": "10.0.0.2", 00:19:16.084 "trsvcid": "4420", 00:19:16.084 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.084 "prchk_reftag": false, 00:19:16.084 "prchk_guard": false, 00:19:16.084 "ctrlr_loss_timeout_sec": 0, 00:19:16.084 "reconnect_delay_sec": 0, 00:19:16.084 "fast_io_fail_timeout_sec": 0, 00:19:16.084 "psk": "/tmp/tmp.YP6M1gS6Yz", 00:19:16.084 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:16.084 "hdgst": false, 00:19:16.084 "ddgst": false 00:19:16.084 } 00:19:16.084 }, 00:19:16.084 { 00:19:16.084 "method": "bdev_nvme_set_hotplug", 00:19:16.084 "params": { 00:19:16.084 "period_us": 100000, 00:19:16.084 "enable": false 00:19:16.084 } 00:19:16.084 }, 00:19:16.084 { 00:19:16.084 "method": "bdev_wait_for_examine" 00:19:16.084 } 00:19:16.084 ] 00:19:16.084 }, 00:19:16.084 { 00:19:16.084 "subsystem": "nbd", 00:19:16.084 "config": [] 00:19:16.084 } 00:19:16.084 ] 00:19:16.084 }' 00:19:16.084 20:44:50 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 2720942 00:19:16.084 20:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2720942 ']' 00:19:16.084 20:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2720942 00:19:16.084 20:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:16.084 20:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:16.084 20:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2720942 00:19:16.343 20:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:16.343 20:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:16.343 20:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2720942' 00:19:16.343 killing process with pid 2720942 00:19:16.343 20:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2720942 00:19:16.343 Received shutdown signal, test time was about 10.000000 seconds 00:19:16.343 00:19:16.343 Latency(us) 00:19:16.343 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:19:16.343 =================================================================================================================== 00:19:16.343 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:16.343 [2024-07-15 20:44:50.590619] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:16.343 20:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2720942 00:19:16.343 20:44:50 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 2720484 00:19:16.343 20:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2720484 ']' 00:19:16.343 20:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2720484 00:19:16.343 20:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:16.343 20:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:16.343 20:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2720484 00:19:16.343 20:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:16.343 20:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:16.343 20:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2720484' 00:19:16.343 killing process with pid 2720484 00:19:16.343 20:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2720484 00:19:16.343 [2024-07-15 20:44:50.816084] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:16.343 20:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2720484 00:19:16.602 20:44:51 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:16.602 20:44:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:16.602 20:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:16.602 20:44:51 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:19:16.602 "subsystems": [ 00:19:16.602 { 00:19:16.602 "subsystem": "keyring", 00:19:16.602 "config": [] 00:19:16.602 }, 00:19:16.602 { 00:19:16.602 "subsystem": "iobuf", 00:19:16.602 "config": [ 00:19:16.602 { 00:19:16.602 "method": "iobuf_set_options", 00:19:16.602 "params": { 00:19:16.602 "small_pool_count": 8192, 00:19:16.602 "large_pool_count": 1024, 00:19:16.602 "small_bufsize": 8192, 00:19:16.602 "large_bufsize": 135168 00:19:16.602 } 00:19:16.602 } 00:19:16.602 ] 00:19:16.602 }, 00:19:16.602 { 00:19:16.602 "subsystem": "sock", 00:19:16.602 "config": [ 00:19:16.602 { 00:19:16.602 "method": "sock_set_default_impl", 00:19:16.602 "params": { 00:19:16.602 "impl_name": "posix" 00:19:16.602 } 00:19:16.602 }, 00:19:16.602 { 00:19:16.602 "method": "sock_impl_set_options", 00:19:16.602 "params": { 00:19:16.602 "impl_name": "ssl", 00:19:16.602 "recv_buf_size": 4096, 00:19:16.602 "send_buf_size": 4096, 00:19:16.602 "enable_recv_pipe": true, 00:19:16.602 "enable_quickack": false, 00:19:16.602 "enable_placement_id": 0, 00:19:16.602 "enable_zerocopy_send_server": true, 00:19:16.602 "enable_zerocopy_send_client": false, 00:19:16.602 "zerocopy_threshold": 0, 00:19:16.602 "tls_version": 0, 00:19:16.602 "enable_ktls": false 00:19:16.602 } 00:19:16.602 }, 00:19:16.602 { 00:19:16.602 "method": "sock_impl_set_options", 00:19:16.602 "params": { 00:19:16.602 "impl_name": "posix", 00:19:16.602 
"recv_buf_size": 2097152, 00:19:16.602 "send_buf_size": 2097152, 00:19:16.602 "enable_recv_pipe": true, 00:19:16.602 "enable_quickack": false, 00:19:16.602 "enable_placement_id": 0, 00:19:16.602 "enable_zerocopy_send_server": true, 00:19:16.602 "enable_zerocopy_send_client": false, 00:19:16.602 "zerocopy_threshold": 0, 00:19:16.602 "tls_version": 0, 00:19:16.602 "enable_ktls": false 00:19:16.602 } 00:19:16.602 } 00:19:16.602 ] 00:19:16.602 }, 00:19:16.602 { 00:19:16.602 "subsystem": "vmd", 00:19:16.602 "config": [] 00:19:16.602 }, 00:19:16.602 { 00:19:16.602 "subsystem": "accel", 00:19:16.602 "config": [ 00:19:16.602 { 00:19:16.602 "method": "accel_set_options", 00:19:16.602 "params": { 00:19:16.602 "small_cache_size": 128, 00:19:16.602 "large_cache_size": 16, 00:19:16.602 "task_count": 2048, 00:19:16.602 "sequence_count": 2048, 00:19:16.602 "buf_count": 2048 00:19:16.602 } 00:19:16.602 } 00:19:16.602 ] 00:19:16.602 }, 00:19:16.602 { 00:19:16.602 "subsystem": "bdev", 00:19:16.603 "config": [ 00:19:16.603 { 00:19:16.603 "method": "bdev_set_options", 00:19:16.603 "params": { 00:19:16.603 "bdev_io_pool_size": 65535, 00:19:16.603 "bdev_io_cache_size": 256, 00:19:16.603 "bdev_auto_examine": true, 00:19:16.603 "iobuf_small_cache_size": 128, 00:19:16.603 "iobuf_large_cache_size": 16 00:19:16.603 } 00:19:16.603 }, 00:19:16.603 { 00:19:16.603 "method": "bdev_raid_set_options", 00:19:16.603 "params": { 00:19:16.603 "process_window_size_kb": 1024 00:19:16.603 } 00:19:16.603 }, 00:19:16.603 { 00:19:16.603 "method": "bdev_iscsi_set_options", 00:19:16.603 "params": { 00:19:16.603 "timeout_sec": 30 00:19:16.603 } 00:19:16.603 }, 00:19:16.603 { 00:19:16.603 "method": "bdev_nvme_set_options", 00:19:16.603 "params": { 00:19:16.603 "action_on_timeout": "none", 00:19:16.603 "timeout_us": 0, 00:19:16.603 "timeout_admin_us": 0, 00:19:16.603 "keep_alive_timeout_ms": 10000, 00:19:16.603 "arbitration_burst": 0, 00:19:16.603 "low_priority_weight": 0, 00:19:16.603 "medium_priority_weight": 0, 00:19:16.603 "high_priority_weight": 0, 00:19:16.603 "nvme_adminq_poll_period_us": 10000, 00:19:16.603 "nvme_ioq_poll_period_us": 0, 00:19:16.603 "io_queue_requests": 0, 00:19:16.603 "delay_cmd_submit": true, 00:19:16.603 "transport_retry_count": 4, 00:19:16.603 "bdev_retry_count": 3, 00:19:16.603 "transport_ack_timeout": 0, 00:19:16.603 "ctrlr_loss_timeout_sec": 0, 00:19:16.603 "reconnect_delay_sec": 0, 00:19:16.603 "fast_io_fail_timeout_sec": 0, 00:19:16.603 "disable_auto_failback": false, 00:19:16.603 "generate_uuids": false, 00:19:16.603 "transport_tos": 0, 00:19:16.603 "nvme_error_stat": false, 00:19:16.603 "rdma_srq_size": 0, 00:19:16.603 "io_path_stat": false, 00:19:16.603 "allow_accel_sequence": false, 00:19:16.603 "rdma_max_cq_size": 0, 00:19:16.603 "rdma_cm_event_timeout_ms": 0, 00:19:16.603 "dhchap_digests": [ 00:19:16.603 "sha256", 00:19:16.603 "sha384", 00:19:16.603 "sha512" 00:19:16.603 ], 00:19:16.603 "dhchap_dhgroups": [ 00:19:16.603 "null", 00:19:16.603 "ffdhe2048", 00:19:16.603 "ffdhe3072", 00:19:16.603 "ffdhe4096", 00:19:16.603 "ffdhe6144", 00:19:16.603 "ffdhe8192" 00:19:16.603 ] 00:19:16.603 } 00:19:16.603 }, 00:19:16.603 { 00:19:16.603 "method": "bdev_nvme_set_hotplug", 00:19:16.603 "params": { 00:19:16.603 "period_us": 100000, 00:19:16.603 "enable": false 00:19:16.603 } 00:19:16.603 }, 00:19:16.603 { 00:19:16.603 "method": "bdev_malloc_create", 00:19:16.603 "params": { 00:19:16.603 "name": "malloc0", 00:19:16.603 "num_blocks": 8192, 00:19:16.603 "block_size": 4096, 00:19:16.603 "physical_block_size": 4096, 
00:19:16.603 "uuid": "720050ba-cc06-4166-af5e-1f97ef3b8f7f", 00:19:16.603 "optimal_io_boundary": 0 00:19:16.603 } 00:19:16.603 }, 00:19:16.603 { 00:19:16.603 "method": "bdev_wait_for_examine" 00:19:16.603 } 00:19:16.603 ] 00:19:16.603 }, 00:19:16.603 { 00:19:16.603 "subsystem": "nbd", 00:19:16.603 "config": [] 00:19:16.603 }, 00:19:16.603 { 00:19:16.603 "subsystem": "scheduler", 00:19:16.603 "config": [ 00:19:16.603 { 00:19:16.603 "method": "framework_set_scheduler", 00:19:16.603 "params": { 00:19:16.603 "name": "static" 00:19:16.603 } 00:19:16.603 } 00:19:16.603 ] 00:19:16.603 }, 00:19:16.603 { 00:19:16.603 "subsystem": "nvmf", 00:19:16.603 "config": [ 00:19:16.603 { 00:19:16.603 "method": "nvmf_set_config", 00:19:16.603 "params": { 00:19:16.603 "discovery_filter": "match_any", 00:19:16.603 "admin_cmd_passthru": { 00:19:16.603 "identify_ctrlr": false 00:19:16.603 } 00:19:16.603 } 00:19:16.603 }, 00:19:16.603 { 00:19:16.603 "method": "nvmf_set_max_subsystems", 00:19:16.603 "params": { 00:19:16.603 "max_subsystems": 1024 00:19:16.603 } 00:19:16.603 }, 00:19:16.603 { 00:19:16.603 "method": "nvmf_set_crdt", 00:19:16.603 "params": { 00:19:16.603 "crdt1": 0, 00:19:16.603 "crdt2": 0, 00:19:16.603 "crdt3": 0 00:19:16.603 } 00:19:16.603 }, 00:19:16.603 { 00:19:16.603 "method": "nvmf_create_transport", 00:19:16.603 "params": { 00:19:16.603 "trtype": "TCP", 00:19:16.603 "max_queue_depth": 128, 00:19:16.603 "max_io_qpairs_per_ctrlr": 127, 00:19:16.603 "in_capsule_data_size": 4096, 00:19:16.603 "max_io_size": 131072, 00:19:16.603 "io_unit_size": 131072, 00:19:16.603 "max_aq_depth": 128, 00:19:16.603 "num_shared_buffers": 511, 00:19:16.603 "buf_cache_size": 4294967295, 00:19:16.603 "dif_insert_or_strip": false, 00:19:16.603 "zcopy": false, 00:19:16.603 "c2h_success": false, 00:19:16.603 "sock_priority": 0, 00:19:16.603 "abort_timeout_sec": 1, 00:19:16.603 "ack_timeout": 0, 00:19:16.603 "data_wr_pool_size": 0 00:19:16.603 } 00:19:16.603 }, 00:19:16.603 { 00:19:16.603 "method": "nvmf_create_subsystem", 00:19:16.603 "params": { 00:19:16.603 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.603 "allow_any_host": false, 00:19:16.603 "serial_number": "SPDK00000000000001", 00:19:16.603 "model_number": "SPDK bdev Controller", 00:19:16.603 "max_namespaces": 10, 00:19:16.603 "min_cntlid": 1, 00:19:16.603 "max_cntlid": 65519, 00:19:16.603 "ana_reporting": false 00:19:16.603 } 00:19:16.603 }, 00:19:16.603 { 00:19:16.603 "method": "nvmf_subsystem_add_host", 00:19:16.603 "params": { 00:19:16.603 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.603 "host": "nqn.2016-06.io.spdk:host1", 00:19:16.603 "psk": "/tmp/tmp.YP6M1gS6Yz" 00:19:16.603 } 00:19:16.603 }, 00:19:16.603 { 00:19:16.603 "method": "nvmf_subsystem_add_ns", 00:19:16.603 "params": { 00:19:16.603 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.603 "namespace": { 00:19:16.603 "nsid": 1, 00:19:16.603 "bdev_name": "malloc0", 00:19:16.603 "nguid": "720050BACC064166AF5E1F97EF3B8F7F", 00:19:16.603 "uuid": "720050ba-cc06-4166-af5e-1f97ef3b8f7f", 00:19:16.603 "no_auto_visible": false 00:19:16.603 } 00:19:16.603 } 00:19:16.603 }, 00:19:16.603 { 00:19:16.603 "method": "nvmf_subsystem_add_listener", 00:19:16.603 "params": { 00:19:16.603 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.603 "listen_address": { 00:19:16.603 "trtype": "TCP", 00:19:16.603 "adrfam": "IPv4", 00:19:16.603 "traddr": "10.0.0.2", 00:19:16.603 "trsvcid": "4420" 00:19:16.603 }, 00:19:16.603 "secure_channel": true 00:19:16.603 } 00:19:16.603 } 00:19:16.603 ] 00:19:16.603 } 00:19:16.603 ] 00:19:16.603 }' 
00:19:16.603 20:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.603 20:44:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2721199 00:19:16.603 20:44:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2721199 00:19:16.603 20:44:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:16.603 20:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2721199 ']' 00:19:16.603 20:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.603 20:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:16.603 20:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.603 20:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:16.603 20:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.603 [2024-07-15 20:44:51.060431] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:19:16.603 [2024-07-15 20:44:51.060477] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.862 EAL: No free 2048 kB hugepages reported on node 1 00:19:16.862 [2024-07-15 20:44:51.118429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.862 [2024-07-15 20:44:51.198503] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:16.862 [2024-07-15 20:44:51.198532] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:16.862 [2024-07-15 20:44:51.198539] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:16.862 [2024-07-15 20:44:51.198545] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:16.862 [2024-07-15 20:44:51.198550] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
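The JSON blob echoed above is a previously captured save_config dump being replayed: the harness saves the running target's configuration over its RPC socket and pipes it back through -c /dev/fd/62 on the next start. A minimal sketch of the same round trip outside the harness, assuming a target already listening on the default RPC socket (the file name tgt_config.json is illustrative):

# capture the live configuration of the target behind /var/tmp/spdk.sock
scripts/rpc.py save_config > tgt_config.json
# restart the target with the captured configuration applied at startup
build/bin/nvmf_tgt -m 0x2 -c tgt_config.json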
00:19:16.862 [2024-07-15 20:44:51.198599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:17.121 [2024-07-15 20:44:51.402153] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:17.121 [2024-07-15 20:44:51.418127] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:17.121 [2024-07-15 20:44:51.434182] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:17.121 [2024-07-15 20:44:51.449551] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:17.380 20:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:17.380 20:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:17.380 20:44:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:17.380 20:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:17.380 20:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.639 20:44:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:17.639 20:44:51 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2721445 00:19:17.639 20:44:51 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2721445 /var/tmp/bdevperf.sock 00:19:17.639 20:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2721445 ']' 00:19:17.639 20:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:17.639 20:44:51 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:17.639 20:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:17.639 20:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:17.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
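bdevperf is launched below with -z, so it comes up idle on its private RPC socket and waits to be configured and driven externally; the actual I/O only starts once perform_tests is issued further down. The shape of that pattern, with arguments taken from this run (a plain config file stands in for the /dev/fd/63 process substitution the harness uses):

# start bdevperf idle (-z) on a private RPC socket, config supplied at startup
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c bdevperf.json &
# kick off the configured jobs once the socket is up
examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests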
00:19:17.639 20:44:51 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:19:17.639 "subsystems": [ 00:19:17.639 { 00:19:17.639 "subsystem": "keyring", 00:19:17.639 "config": [] 00:19:17.639 }, 00:19:17.639 { 00:19:17.639 "subsystem": "iobuf", 00:19:17.639 "config": [ 00:19:17.639 { 00:19:17.639 "method": "iobuf_set_options", 00:19:17.639 "params": { 00:19:17.639 "small_pool_count": 8192, 00:19:17.639 "large_pool_count": 1024, 00:19:17.639 "small_bufsize": 8192, 00:19:17.639 "large_bufsize": 135168 00:19:17.639 } 00:19:17.639 } 00:19:17.639 ] 00:19:17.639 }, 00:19:17.639 { 00:19:17.639 "subsystem": "sock", 00:19:17.639 "config": [ 00:19:17.639 { 00:19:17.639 "method": "sock_set_default_impl", 00:19:17.639 "params": { 00:19:17.639 "impl_name": "posix" 00:19:17.639 } 00:19:17.639 }, 00:19:17.639 { 00:19:17.640 "method": "sock_impl_set_options", 00:19:17.640 "params": { 00:19:17.640 "impl_name": "ssl", 00:19:17.640 "recv_buf_size": 4096, 00:19:17.640 "send_buf_size": 4096, 00:19:17.640 "enable_recv_pipe": true, 00:19:17.640 "enable_quickack": false, 00:19:17.640 "enable_placement_id": 0, 00:19:17.640 "enable_zerocopy_send_server": true, 00:19:17.640 "enable_zerocopy_send_client": false, 00:19:17.640 "zerocopy_threshold": 0, 00:19:17.640 "tls_version": 0, 00:19:17.640 "enable_ktls": false 00:19:17.640 } 00:19:17.640 }, 00:19:17.640 { 00:19:17.640 "method": "sock_impl_set_options", 00:19:17.640 "params": { 00:19:17.640 "impl_name": "posix", 00:19:17.640 "recv_buf_size": 2097152, 00:19:17.640 "send_buf_size": 2097152, 00:19:17.640 "enable_recv_pipe": true, 00:19:17.640 "enable_quickack": false, 00:19:17.640 "enable_placement_id": 0, 00:19:17.640 "enable_zerocopy_send_server": true, 00:19:17.640 "enable_zerocopy_send_client": false, 00:19:17.640 "zerocopy_threshold": 0, 00:19:17.640 "tls_version": 0, 00:19:17.640 "enable_ktls": false 00:19:17.640 } 00:19:17.640 } 00:19:17.640 ] 00:19:17.640 }, 00:19:17.640 { 00:19:17.640 "subsystem": "vmd", 00:19:17.640 "config": [] 00:19:17.640 }, 00:19:17.640 { 00:19:17.640 "subsystem": "accel", 00:19:17.640 "config": [ 00:19:17.640 { 00:19:17.640 "method": "accel_set_options", 00:19:17.640 "params": { 00:19:17.640 "small_cache_size": 128, 00:19:17.640 "large_cache_size": 16, 00:19:17.640 "task_count": 2048, 00:19:17.640 "sequence_count": 2048, 00:19:17.640 "buf_count": 2048 00:19:17.640 } 00:19:17.640 } 00:19:17.640 ] 00:19:17.640 }, 00:19:17.640 { 00:19:17.640 "subsystem": "bdev", 00:19:17.640 "config": [ 00:19:17.640 { 00:19:17.640 "method": "bdev_set_options", 00:19:17.640 "params": { 00:19:17.640 "bdev_io_pool_size": 65535, 00:19:17.640 "bdev_io_cache_size": 256, 00:19:17.640 "bdev_auto_examine": true, 00:19:17.640 "iobuf_small_cache_size": 128, 00:19:17.640 "iobuf_large_cache_size": 16 00:19:17.640 } 00:19:17.640 }, 00:19:17.640 { 00:19:17.640 "method": "bdev_raid_set_options", 00:19:17.640 "params": { 00:19:17.640 "process_window_size_kb": 1024 00:19:17.640 } 00:19:17.640 }, 00:19:17.640 { 00:19:17.640 "method": "bdev_iscsi_set_options", 00:19:17.640 "params": { 00:19:17.640 "timeout_sec": 30 00:19:17.640 } 00:19:17.640 }, 00:19:17.640 { 00:19:17.640 "method": "bdev_nvme_set_options", 00:19:17.640 "params": { 00:19:17.640 "action_on_timeout": "none", 00:19:17.640 "timeout_us": 0, 00:19:17.640 "timeout_admin_us": 0, 00:19:17.640 "keep_alive_timeout_ms": 10000, 00:19:17.640 "arbitration_burst": 0, 00:19:17.640 "low_priority_weight": 0, 00:19:17.640 "medium_priority_weight": 0, 00:19:17.640 "high_priority_weight": 0, 00:19:17.640 
"nvme_adminq_poll_period_us": 10000, 00:19:17.640 "nvme_ioq_poll_period_us": 0, 00:19:17.640 "io_queue_requests": 512, 00:19:17.640 "delay_cmd_submit": true, 00:19:17.640 "transport_retry_count": 4, 00:19:17.640 "bdev_retry_count": 3, 00:19:17.640 "transport_ack_timeout": 0, 00:19:17.640 "ctrlr_loss_timeout_sec": 0, 00:19:17.640 "reconnect_delay_sec": 0, 00:19:17.640 "fast_io_fail_timeout_sec": 0, 00:19:17.640 "disable_auto_failback": false, 00:19:17.640 "generate_uuids": false, 00:19:17.640 "transport_tos": 0, 00:19:17.640 "nvme_error_stat": false, 00:19:17.640 "rdma_srq_size": 0, 00:19:17.640 "io_path_stat": false, 00:19:17.640 "allow_accel_sequence": false, 00:19:17.640 "rdma_max_cq_size": 0, 00:19:17.640 "rdma_cm_event_timeout_ms": 0, 00:19:17.640 "dhchap_digests": [ 00:19:17.640 "sha256", 00:19:17.640 "sha384", 00:19:17.640 "sha512" 00:19:17.640 ], 00:19:17.640 "dhchap_dhgroups": [ 00:19:17.640 "null", 00:19:17.640 "ffdhe2048", 00:19:17.640 "ffdhe3072", 00:19:17.640 "ffdhe4096", 00:19:17.640 "ffdhe6144", 00:19:17.640 "ffdhe8192" 00:19:17.640 ] 00:19:17.640 } 00:19:17.640 }, 00:19:17.640 { 00:19:17.640 "method": "bdev_nvme_attach_controller", 00:19:17.640 "params": { 00:19:17.640 "name": "TLSTEST", 00:19:17.640 "trtype": "TCP", 00:19:17.640 "adrfam": "IPv4", 00:19:17.640 "traddr": "10.0.0.2", 00:19:17.640 "trsvcid": "4420", 00:19:17.640 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:17.640 "prchk_reftag": false, 00:19:17.640 "prchk_guard": false, 00:19:17.640 "ctrlr_loss_timeout_sec": 0, 00:19:17.640 "reconnect_delay_sec": 0, 00:19:17.640 "fast_io_fail_timeout_sec": 0, 00:19:17.640 "psk": "/tmp/tmp.YP6M1gS6Yz", 00:19:17.640 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:17.640 "hdgst": false, 00:19:17.640 "ddgst": false 00:19:17.640 } 00:19:17.640 }, 00:19:17.640 { 00:19:17.640 "method": "bdev_nvme_set_hotplug", 00:19:17.640 "params": { 00:19:17.640 "period_us": 100000, 00:19:17.640 "enable": false 00:19:17.640 } 00:19:17.640 }, 00:19:17.640 { 00:19:17.640 "method": "bdev_wait_for_examine" 00:19:17.640 } 00:19:17.640 ] 00:19:17.640 }, 00:19:17.640 { 00:19:17.640 "subsystem": "nbd", 00:19:17.640 "config": [] 00:19:17.640 } 00:19:17.640 ] 00:19:17.640 }' 00:19:17.640 20:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:17.640 20:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.640 [2024-07-15 20:44:51.933839] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:19:17.640 [2024-07-15 20:44:51.933884] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2721445 ] 00:19:17.640 EAL: No free 2048 kB hugepages reported on node 1 00:19:17.640 [2024-07-15 20:44:51.983292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.640 [2024-07-15 20:44:52.060331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:17.899 [2024-07-15 20:44:52.201687] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:17.899 [2024-07-15 20:44:52.201765] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:18.467 20:44:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:18.467 20:44:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:18.467 20:44:52 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:18.467 Running I/O for 10 seconds... 00:19:28.476 00:19:28.476 Latency(us) 00:19:28.476 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.476 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:28.476 Verification LBA range: start 0x0 length 0x2000 00:19:28.476 TLSTESTn1 : 10.02 5566.35 21.74 0.00 0.00 22955.20 6639.08 38295.82 00:19:28.476 =================================================================================================================== 00:19:28.476 Total : 5566.35 21.74 0.00 0.00 22955.20 6639.08 38295.82 00:19:28.476 0 00:19:28.476 20:45:02 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:28.476 20:45:02 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 2721445 00:19:28.476 20:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2721445 ']' 00:19:28.476 20:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2721445 00:19:28.476 20:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:28.476 20:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:28.476 20:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2721445 00:19:28.476 20:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:28.476 20:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:28.476 20:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2721445' 00:19:28.476 killing process with pid 2721445 00:19:28.476 20:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2721445 00:19:28.476 Received shutdown signal, test time was about 10.000000 seconds 00:19:28.476 00:19:28.476 Latency(us) 00:19:28.476 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.476 =================================================================================================================== 00:19:28.476 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:28.476 [2024-07-15 20:45:02.943770] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:28.476 20:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2721445 00:19:28.735 20:45:03 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 2721199 00:19:28.735 20:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2721199 ']' 00:19:28.735 20:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2721199 00:19:28.735 20:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:28.735 20:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:28.735 20:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2721199 00:19:28.735 20:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:28.735 20:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:28.735 20:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2721199' 00:19:28.735 killing process with pid 2721199 00:19:28.735 20:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2721199 00:19:28.735 [2024-07-15 20:45:03.171462] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:28.735 20:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2721199 00:19:28.993 20:45:03 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:19:28.993 20:45:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:28.993 20:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:28.993 20:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.993 20:45:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2723414 00:19:28.993 20:45:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2723414 00:19:28.993 20:45:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:28.993 20:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2723414 ']' 00:19:28.993 20:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.993 20:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:28.993 20:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.993 20:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:28.993 20:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.993 [2024-07-15 20:45:03.417836] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:19:28.993 [2024-07-15 20:45:03.417880] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.993 EAL: No free 2048 kB hugepages reported on node 1 00:19:29.251 [2024-07-15 20:45:03.476386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.251 [2024-07-15 20:45:03.545254] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:29.251 [2024-07-15 20:45:03.545297] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:29.251 [2024-07-15 20:45:03.545304] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:29.251 [2024-07-15 20:45:03.545309] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:29.251 [2024-07-15 20:45:03.545314] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:29.251 [2024-07-15 20:45:03.545331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.819 20:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:29.819 20:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:29.819 20:45:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:29.819 20:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:29.819 20:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.819 20:45:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:29.819 20:45:04 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.YP6M1gS6Yz 00:19:29.819 20:45:04 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.YP6M1gS6Yz 00:19:29.819 20:45:04 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:30.078 [2024-07-15 20:45:04.408725] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:30.078 20:45:04 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:30.336 20:45:04 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:30.336 [2024-07-15 20:45:04.753598] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:30.336 [2024-07-15 20:45:04.753773] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:30.336 20:45:04 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:30.595 malloc0 00:19:30.595 20:45:04 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:30.854 20:45:05 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.YP6M1gS6Yz 00:19:30.854 [2024-07-15 20:45:05.267142] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:30.854 20:45:05 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:30.854 20:45:05 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2723684 00:19:30.854 20:45:05 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:30.854 20:45:05 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2723684 /var/tmp/bdevperf.sock 00:19:30.854 20:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2723684 ']' 00:19:30.854 20:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:30.855 20:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:30.855 20:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:30.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:30.855 20:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:30.855 20:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.855 [2024-07-15 20:45:05.317529] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:19:30.855 [2024-07-15 20:45:05.317574] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2723684 ] 00:19:31.113 EAL: No free 2048 kB hugepages reported on node 1 00:19:31.113 [2024-07-15 20:45:05.371794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.113 [2024-07-15 20:45:05.445943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.113 20:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:31.113 20:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:31.113 20:45:05 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YP6M1gS6Yz 00:19:31.372 20:45:05 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:31.630 [2024-07-15 20:45:05.872149] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:31.630 nvme0n1 00:19:31.630 20:45:05 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:31.630 Running I/O for 1 seconds... 
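The attach sequence just issued is the keyring-based TLS variant: the PSK file is first registered as a named key, and bdev_nvme_attach_controller then references it by name rather than passing a raw path through the deprecated spdk_nvme_ctrlr_opts.psk field warned about earlier in the run. The two client-side calls, copied from the invocations above:

# register the PSK file under the name key0 on the bdevperf RPC socket
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YP6M1gS6Yz
# attach to the TLS listener, referencing the key by name rather than by path
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1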
00:19:33.005 00:19:33.005 Latency(us) 00:19:33.005 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.005 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:33.005 Verification LBA range: start 0x0 length 0x2000 00:19:33.005 nvme0n1 : 1.03 4707.25 18.39 0.00 0.00 26878.66 7265.95 83886.08 00:19:33.005 =================================================================================================================== 00:19:33.005 Total : 4707.25 18.39 0.00 0.00 26878.66 7265.95 83886.08 00:19:33.005 0 00:19:33.005 20:45:07 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 2723684 00:19:33.005 20:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2723684 ']' 00:19:33.005 20:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2723684 00:19:33.005 20:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:33.005 20:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:33.005 20:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2723684 00:19:33.005 20:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:33.005 20:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:33.005 20:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2723684' 00:19:33.005 killing process with pid 2723684 00:19:33.005 20:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2723684 00:19:33.005 Received shutdown signal, test time was about 1.000000 seconds 00:19:33.005 00:19:33.005 Latency(us) 00:19:33.005 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.005 =================================================================================================================== 00:19:33.005 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:33.005 20:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2723684 00:19:33.005 20:45:07 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 2723414 00:19:33.005 20:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2723414 ']' 00:19:33.005 20:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2723414 00:19:33.005 20:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:33.005 20:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:33.005 20:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2723414 00:19:33.005 20:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:33.005 20:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:33.005 20:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2723414' 00:19:33.005 killing process with pid 2723414 00:19:33.005 20:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2723414 00:19:33.005 [2024-07-15 20:45:07.360098] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:33.005 20:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2723414 00:19:33.278 20:45:07 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:19:33.278 20:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:33.279 
20:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:33.279 20:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.279 20:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:33.279 20:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2724141 00:19:33.279 20:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2724141 00:19:33.279 20:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2724141 ']' 00:19:33.279 20:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.279 20:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:33.279 20:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:33.279 20:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:33.279 20:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.279 [2024-07-15 20:45:07.590576] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:19:33.279 [2024-07-15 20:45:07.590625] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:33.279 EAL: No free 2048 kB hugepages reported on node 1 00:19:33.279 [2024-07-15 20:45:07.649672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.279 [2024-07-15 20:45:07.721193] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:33.279 [2024-07-15 20:45:07.721241] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:33.279 [2024-07-15 20:45:07.721249] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:33.279 [2024-07-15 20:45:07.721255] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:33.279 [2024-07-15 20:45:07.721260] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:33.279 [2024-07-15 20:45:07.721279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.212 20:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:34.212 20:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:34.212 20:45:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:34.212 20:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:34.212 20:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.212 20:45:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:34.212 20:45:08 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:19:34.212 20:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.212 20:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.212 [2024-07-15 20:45:08.455954] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:34.212 malloc0 00:19:34.212 [2024-07-15 20:45:08.484122] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:34.212 [2024-07-15 20:45:08.484314] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:34.212 20:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.212 20:45:08 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=2724551 00:19:34.212 20:45:08 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 2724551 /var/tmp/bdevperf.sock 00:19:34.212 20:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2724551 ']' 00:19:34.212 20:45:08 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:34.212 20:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:34.212 20:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:34.212 20:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:34.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:34.212 20:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:34.212 20:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.212 [2024-07-15 20:45:08.544112] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:19:34.212 [2024-07-15 20:45:08.544156] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2724551 ] 00:19:34.212 EAL: No free 2048 kB hugepages reported on node 1 00:19:34.212 [2024-07-15 20:45:08.597005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.212 [2024-07-15 20:45:08.674319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:34.470 20:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:34.470 20:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:34.470 20:45:08 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YP6M1gS6Yz 00:19:34.470 20:45:08 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:34.729 [2024-07-15 20:45:09.087818] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:34.729 nvme0n1 00:19:34.729 20:45:09 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:34.987 Running I/O for 1 seconds... 00:19:35.923 00:19:35.923 Latency(us) 00:19:35.923 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.923 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:35.923 Verification LBA range: start 0x0 length 0x2000 00:19:35.923 nvme0n1 : 1.02 5350.16 20.90 0.00 0.00 23729.14 6667.58 48325.68 00:19:35.923 =================================================================================================================== 00:19:35.923 Total : 5350.16 20.90 0.00 0.00 23729.14 6667.58 48325.68 00:19:35.923 0 00:19:35.923 20:45:10 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:19:35.923 20:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.923 20:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.182 20:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.182 20:45:10 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:19:36.182 "subsystems": [ 00:19:36.182 { 00:19:36.182 "subsystem": "keyring", 00:19:36.182 "config": [ 00:19:36.182 { 00:19:36.182 "method": "keyring_file_add_key", 00:19:36.182 "params": { 00:19:36.182 "name": "key0", 00:19:36.182 "path": "/tmp/tmp.YP6M1gS6Yz" 00:19:36.182 } 00:19:36.182 } 00:19:36.182 ] 00:19:36.182 }, 00:19:36.182 { 00:19:36.182 "subsystem": "iobuf", 00:19:36.182 "config": [ 00:19:36.182 { 00:19:36.182 "method": "iobuf_set_options", 00:19:36.182 "params": { 00:19:36.182 "small_pool_count": 8192, 00:19:36.182 "large_pool_count": 1024, 00:19:36.182 "small_bufsize": 8192, 00:19:36.182 "large_bufsize": 135168 00:19:36.182 } 00:19:36.182 } 00:19:36.182 ] 00:19:36.183 }, 00:19:36.183 { 00:19:36.183 "subsystem": "sock", 00:19:36.183 "config": [ 00:19:36.183 { 00:19:36.183 "method": "sock_set_default_impl", 00:19:36.183 "params": { 00:19:36.183 "impl_name": "posix" 00:19:36.183 } 
00:19:36.183 }, 00:19:36.183 { 00:19:36.183 "method": "sock_impl_set_options", 00:19:36.183 "params": { 00:19:36.183 "impl_name": "ssl", 00:19:36.183 "recv_buf_size": 4096, 00:19:36.183 "send_buf_size": 4096, 00:19:36.183 "enable_recv_pipe": true, 00:19:36.183 "enable_quickack": false, 00:19:36.183 "enable_placement_id": 0, 00:19:36.183 "enable_zerocopy_send_server": true, 00:19:36.183 "enable_zerocopy_send_client": false, 00:19:36.183 "zerocopy_threshold": 0, 00:19:36.183 "tls_version": 0, 00:19:36.183 "enable_ktls": false 00:19:36.183 } 00:19:36.183 }, 00:19:36.183 { 00:19:36.183 "method": "sock_impl_set_options", 00:19:36.183 "params": { 00:19:36.183 "impl_name": "posix", 00:19:36.183 "recv_buf_size": 2097152, 00:19:36.183 "send_buf_size": 2097152, 00:19:36.183 "enable_recv_pipe": true, 00:19:36.183 "enable_quickack": false, 00:19:36.183 "enable_placement_id": 0, 00:19:36.183 "enable_zerocopy_send_server": true, 00:19:36.183 "enable_zerocopy_send_client": false, 00:19:36.183 "zerocopy_threshold": 0, 00:19:36.183 "tls_version": 0, 00:19:36.183 "enable_ktls": false 00:19:36.183 } 00:19:36.183 } 00:19:36.183 ] 00:19:36.183 }, 00:19:36.183 { 00:19:36.183 "subsystem": "vmd", 00:19:36.183 "config": [] 00:19:36.183 }, 00:19:36.183 { 00:19:36.183 "subsystem": "accel", 00:19:36.183 "config": [ 00:19:36.183 { 00:19:36.183 "method": "accel_set_options", 00:19:36.183 "params": { 00:19:36.183 "small_cache_size": 128, 00:19:36.183 "large_cache_size": 16, 00:19:36.183 "task_count": 2048, 00:19:36.183 "sequence_count": 2048, 00:19:36.183 "buf_count": 2048 00:19:36.183 } 00:19:36.183 } 00:19:36.183 ] 00:19:36.183 }, 00:19:36.183 { 00:19:36.183 "subsystem": "bdev", 00:19:36.183 "config": [ 00:19:36.183 { 00:19:36.183 "method": "bdev_set_options", 00:19:36.183 "params": { 00:19:36.183 "bdev_io_pool_size": 65535, 00:19:36.183 "bdev_io_cache_size": 256, 00:19:36.183 "bdev_auto_examine": true, 00:19:36.183 "iobuf_small_cache_size": 128, 00:19:36.183 "iobuf_large_cache_size": 16 00:19:36.183 } 00:19:36.183 }, 00:19:36.183 { 00:19:36.183 "method": "bdev_raid_set_options", 00:19:36.183 "params": { 00:19:36.183 "process_window_size_kb": 1024 00:19:36.183 } 00:19:36.183 }, 00:19:36.183 { 00:19:36.183 "method": "bdev_iscsi_set_options", 00:19:36.183 "params": { 00:19:36.183 "timeout_sec": 30 00:19:36.183 } 00:19:36.183 }, 00:19:36.183 { 00:19:36.183 "method": "bdev_nvme_set_options", 00:19:36.183 "params": { 00:19:36.183 "action_on_timeout": "none", 00:19:36.183 "timeout_us": 0, 00:19:36.183 "timeout_admin_us": 0, 00:19:36.183 "keep_alive_timeout_ms": 10000, 00:19:36.183 "arbitration_burst": 0, 00:19:36.183 "low_priority_weight": 0, 00:19:36.183 "medium_priority_weight": 0, 00:19:36.183 "high_priority_weight": 0, 00:19:36.183 "nvme_adminq_poll_period_us": 10000, 00:19:36.183 "nvme_ioq_poll_period_us": 0, 00:19:36.183 "io_queue_requests": 0, 00:19:36.183 "delay_cmd_submit": true, 00:19:36.183 "transport_retry_count": 4, 00:19:36.183 "bdev_retry_count": 3, 00:19:36.183 "transport_ack_timeout": 0, 00:19:36.183 "ctrlr_loss_timeout_sec": 0, 00:19:36.183 "reconnect_delay_sec": 0, 00:19:36.183 "fast_io_fail_timeout_sec": 0, 00:19:36.183 "disable_auto_failback": false, 00:19:36.183 "generate_uuids": false, 00:19:36.183 "transport_tos": 0, 00:19:36.183 "nvme_error_stat": false, 00:19:36.183 "rdma_srq_size": 0, 00:19:36.183 "io_path_stat": false, 00:19:36.183 "allow_accel_sequence": false, 00:19:36.183 "rdma_max_cq_size": 0, 00:19:36.183 "rdma_cm_event_timeout_ms": 0, 00:19:36.183 "dhchap_digests": [ 00:19:36.183 "sha256", 
00:19:36.183 "sha384", 00:19:36.183 "sha512" 00:19:36.183 ], 00:19:36.183 "dhchap_dhgroups": [ 00:19:36.183 "null", 00:19:36.183 "ffdhe2048", 00:19:36.183 "ffdhe3072", 00:19:36.183 "ffdhe4096", 00:19:36.183 "ffdhe6144", 00:19:36.183 "ffdhe8192" 00:19:36.183 ] 00:19:36.183 } 00:19:36.183 }, 00:19:36.183 { 00:19:36.183 "method": "bdev_nvme_set_hotplug", 00:19:36.183 "params": { 00:19:36.183 "period_us": 100000, 00:19:36.183 "enable": false 00:19:36.183 } 00:19:36.183 }, 00:19:36.183 { 00:19:36.183 "method": "bdev_malloc_create", 00:19:36.183 "params": { 00:19:36.183 "name": "malloc0", 00:19:36.183 "num_blocks": 8192, 00:19:36.183 "block_size": 4096, 00:19:36.183 "physical_block_size": 4096, 00:19:36.183 "uuid": "f472757b-56c6-44e7-8e3d-cf0d03ebf6a4", 00:19:36.183 "optimal_io_boundary": 0 00:19:36.183 } 00:19:36.183 }, 00:19:36.183 { 00:19:36.183 "method": "bdev_wait_for_examine" 00:19:36.183 } 00:19:36.183 ] 00:19:36.183 }, 00:19:36.183 { 00:19:36.183 "subsystem": "nbd", 00:19:36.183 "config": [] 00:19:36.183 }, 00:19:36.183 { 00:19:36.183 "subsystem": "scheduler", 00:19:36.183 "config": [ 00:19:36.183 { 00:19:36.183 "method": "framework_set_scheduler", 00:19:36.183 "params": { 00:19:36.183 "name": "static" 00:19:36.183 } 00:19:36.183 } 00:19:36.183 ] 00:19:36.183 }, 00:19:36.183 { 00:19:36.183 "subsystem": "nvmf", 00:19:36.183 "config": [ 00:19:36.183 { 00:19:36.183 "method": "nvmf_set_config", 00:19:36.183 "params": { 00:19:36.183 "discovery_filter": "match_any", 00:19:36.183 "admin_cmd_passthru": { 00:19:36.183 "identify_ctrlr": false 00:19:36.183 } 00:19:36.183 } 00:19:36.183 }, 00:19:36.183 { 00:19:36.183 "method": "nvmf_set_max_subsystems", 00:19:36.183 "params": { 00:19:36.183 "max_subsystems": 1024 00:19:36.183 } 00:19:36.183 }, 00:19:36.183 { 00:19:36.183 "method": "nvmf_set_crdt", 00:19:36.183 "params": { 00:19:36.183 "crdt1": 0, 00:19:36.183 "crdt2": 0, 00:19:36.183 "crdt3": 0 00:19:36.183 } 00:19:36.183 }, 00:19:36.183 { 00:19:36.183 "method": "nvmf_create_transport", 00:19:36.183 "params": { 00:19:36.183 "trtype": "TCP", 00:19:36.183 "max_queue_depth": 128, 00:19:36.183 "max_io_qpairs_per_ctrlr": 127, 00:19:36.183 "in_capsule_data_size": 4096, 00:19:36.183 "max_io_size": 131072, 00:19:36.183 "io_unit_size": 131072, 00:19:36.183 "max_aq_depth": 128, 00:19:36.183 "num_shared_buffers": 511, 00:19:36.183 "buf_cache_size": 4294967295, 00:19:36.183 "dif_insert_or_strip": false, 00:19:36.183 "zcopy": false, 00:19:36.183 "c2h_success": false, 00:19:36.183 "sock_priority": 0, 00:19:36.183 "abort_timeout_sec": 1, 00:19:36.183 "ack_timeout": 0, 00:19:36.183 "data_wr_pool_size": 0 00:19:36.183 } 00:19:36.183 }, 00:19:36.183 { 00:19:36.183 "method": "nvmf_create_subsystem", 00:19:36.183 "params": { 00:19:36.183 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:36.183 "allow_any_host": false, 00:19:36.183 "serial_number": "00000000000000000000", 00:19:36.183 "model_number": "SPDK bdev Controller", 00:19:36.183 "max_namespaces": 32, 00:19:36.183 "min_cntlid": 1, 00:19:36.183 "max_cntlid": 65519, 00:19:36.183 "ana_reporting": false 00:19:36.183 } 00:19:36.183 }, 00:19:36.183 { 00:19:36.183 "method": "nvmf_subsystem_add_host", 00:19:36.183 "params": { 00:19:36.183 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:36.183 "host": "nqn.2016-06.io.spdk:host1", 00:19:36.183 "psk": "key0" 00:19:36.183 } 00:19:36.183 }, 00:19:36.183 { 00:19:36.183 "method": "nvmf_subsystem_add_ns", 00:19:36.183 "params": { 00:19:36.183 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:36.183 "namespace": { 00:19:36.183 "nsid": 1, 
00:19:36.183 "bdev_name": "malloc0", 00:19:36.183 "nguid": "F472757B56C644E78E3DCF0D03EBF6A4", 00:19:36.183 "uuid": "f472757b-56c6-44e7-8e3d-cf0d03ebf6a4", 00:19:36.183 "no_auto_visible": false 00:19:36.183 } 00:19:36.183 } 00:19:36.183 }, 00:19:36.183 { 00:19:36.183 "method": "nvmf_subsystem_add_listener", 00:19:36.183 "params": { 00:19:36.183 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:36.183 "listen_address": { 00:19:36.183 "trtype": "TCP", 00:19:36.183 "adrfam": "IPv4", 00:19:36.183 "traddr": "10.0.0.2", 00:19:36.183 "trsvcid": "4420" 00:19:36.183 }, 00:19:36.183 "secure_channel": false, 00:19:36.183 "sock_impl": "ssl" 00:19:36.183 } 00:19:36.183 } 00:19:36.183 ] 00:19:36.183 } 00:19:36.183 ] 00:19:36.183 }' 00:19:36.183 20:45:10 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:36.183 20:45:10 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:19:36.183 "subsystems": [ 00:19:36.183 { 00:19:36.183 "subsystem": "keyring", 00:19:36.183 "config": [ 00:19:36.183 { 00:19:36.183 "method": "keyring_file_add_key", 00:19:36.183 "params": { 00:19:36.183 "name": "key0", 00:19:36.183 "path": "/tmp/tmp.YP6M1gS6Yz" 00:19:36.183 } 00:19:36.183 } 00:19:36.184 ] 00:19:36.184 }, 00:19:36.184 { 00:19:36.184 "subsystem": "iobuf", 00:19:36.184 "config": [ 00:19:36.184 { 00:19:36.184 "method": "iobuf_set_options", 00:19:36.184 "params": { 00:19:36.184 "small_pool_count": 8192, 00:19:36.184 "large_pool_count": 1024, 00:19:36.184 "small_bufsize": 8192, 00:19:36.184 "large_bufsize": 135168 00:19:36.184 } 00:19:36.184 } 00:19:36.184 ] 00:19:36.184 }, 00:19:36.184 { 00:19:36.184 "subsystem": "sock", 00:19:36.184 "config": [ 00:19:36.184 { 00:19:36.184 "method": "sock_set_default_impl", 00:19:36.184 "params": { 00:19:36.184 "impl_name": "posix" 00:19:36.184 } 00:19:36.184 }, 00:19:36.184 { 00:19:36.184 "method": "sock_impl_set_options", 00:19:36.184 "params": { 00:19:36.184 "impl_name": "ssl", 00:19:36.184 "recv_buf_size": 4096, 00:19:36.184 "send_buf_size": 4096, 00:19:36.184 "enable_recv_pipe": true, 00:19:36.184 "enable_quickack": false, 00:19:36.184 "enable_placement_id": 0, 00:19:36.184 "enable_zerocopy_send_server": true, 00:19:36.184 "enable_zerocopy_send_client": false, 00:19:36.184 "zerocopy_threshold": 0, 00:19:36.184 "tls_version": 0, 00:19:36.184 "enable_ktls": false 00:19:36.184 } 00:19:36.184 }, 00:19:36.184 { 00:19:36.184 "method": "sock_impl_set_options", 00:19:36.184 "params": { 00:19:36.184 "impl_name": "posix", 00:19:36.184 "recv_buf_size": 2097152, 00:19:36.184 "send_buf_size": 2097152, 00:19:36.184 "enable_recv_pipe": true, 00:19:36.184 "enable_quickack": false, 00:19:36.184 "enable_placement_id": 0, 00:19:36.184 "enable_zerocopy_send_server": true, 00:19:36.184 "enable_zerocopy_send_client": false, 00:19:36.184 "zerocopy_threshold": 0, 00:19:36.184 "tls_version": 0, 00:19:36.184 "enable_ktls": false 00:19:36.184 } 00:19:36.184 } 00:19:36.184 ] 00:19:36.184 }, 00:19:36.184 { 00:19:36.184 "subsystem": "vmd", 00:19:36.184 "config": [] 00:19:36.184 }, 00:19:36.184 { 00:19:36.184 "subsystem": "accel", 00:19:36.184 "config": [ 00:19:36.184 { 00:19:36.184 "method": "accel_set_options", 00:19:36.184 "params": { 00:19:36.184 "small_cache_size": 128, 00:19:36.184 "large_cache_size": 16, 00:19:36.184 "task_count": 2048, 00:19:36.184 "sequence_count": 2048, 00:19:36.184 "buf_count": 2048 00:19:36.184 } 00:19:36.184 } 00:19:36.184 ] 00:19:36.184 }, 00:19:36.184 { 00:19:36.184 "subsystem": "bdev", 
00:19:36.184 "config": [ 00:19:36.184 { 00:19:36.184 "method": "bdev_set_options", 00:19:36.184 "params": { 00:19:36.184 "bdev_io_pool_size": 65535, 00:19:36.184 "bdev_io_cache_size": 256, 00:19:36.184 "bdev_auto_examine": true, 00:19:36.184 "iobuf_small_cache_size": 128, 00:19:36.184 "iobuf_large_cache_size": 16 00:19:36.184 } 00:19:36.184 }, 00:19:36.184 { 00:19:36.184 "method": "bdev_raid_set_options", 00:19:36.184 "params": { 00:19:36.184 "process_window_size_kb": 1024 00:19:36.184 } 00:19:36.184 }, 00:19:36.184 { 00:19:36.184 "method": "bdev_iscsi_set_options", 00:19:36.184 "params": { 00:19:36.184 "timeout_sec": 30 00:19:36.184 } 00:19:36.184 }, 00:19:36.184 { 00:19:36.184 "method": "bdev_nvme_set_options", 00:19:36.184 "params": { 00:19:36.184 "action_on_timeout": "none", 00:19:36.184 "timeout_us": 0, 00:19:36.184 "timeout_admin_us": 0, 00:19:36.184 "keep_alive_timeout_ms": 10000, 00:19:36.184 "arbitration_burst": 0, 00:19:36.184 "low_priority_weight": 0, 00:19:36.184 "medium_priority_weight": 0, 00:19:36.184 "high_priority_weight": 0, 00:19:36.184 "nvme_adminq_poll_period_us": 10000, 00:19:36.184 "nvme_ioq_poll_period_us": 0, 00:19:36.184 "io_queue_requests": 512, 00:19:36.184 "delay_cmd_submit": true, 00:19:36.184 "transport_retry_count": 4, 00:19:36.184 "bdev_retry_count": 3, 00:19:36.184 "transport_ack_timeout": 0, 00:19:36.184 "ctrlr_loss_timeout_sec": 0, 00:19:36.184 "reconnect_delay_sec": 0, 00:19:36.184 "fast_io_fail_timeout_sec": 0, 00:19:36.184 "disable_auto_failback": false, 00:19:36.184 "generate_uuids": false, 00:19:36.184 "transport_tos": 0, 00:19:36.184 "nvme_error_stat": false, 00:19:36.184 "rdma_srq_size": 0, 00:19:36.184 "io_path_stat": false, 00:19:36.184 "allow_accel_sequence": false, 00:19:36.184 "rdma_max_cq_size": 0, 00:19:36.184 "rdma_cm_event_timeout_ms": 0, 00:19:36.184 "dhchap_digests": [ 00:19:36.184 "sha256", 00:19:36.184 "sha384", 00:19:36.184 "sha512" 00:19:36.184 ], 00:19:36.184 "dhchap_dhgroups": [ 00:19:36.184 "null", 00:19:36.184 "ffdhe2048", 00:19:36.184 "ffdhe3072", 00:19:36.184 "ffdhe4096", 00:19:36.184 "ffdhe6144", 00:19:36.184 "ffdhe8192" 00:19:36.184 ] 00:19:36.184 } 00:19:36.184 }, 00:19:36.184 { 00:19:36.184 "method": "bdev_nvme_attach_controller", 00:19:36.184 "params": { 00:19:36.184 "name": "nvme0", 00:19:36.184 "trtype": "TCP", 00:19:36.184 "adrfam": "IPv4", 00:19:36.184 "traddr": "10.0.0.2", 00:19:36.184 "trsvcid": "4420", 00:19:36.184 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:36.184 "prchk_reftag": false, 00:19:36.184 "prchk_guard": false, 00:19:36.184 "ctrlr_loss_timeout_sec": 0, 00:19:36.184 "reconnect_delay_sec": 0, 00:19:36.184 "fast_io_fail_timeout_sec": 0, 00:19:36.184 "psk": "key0", 00:19:36.184 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:36.184 "hdgst": false, 00:19:36.184 "ddgst": false 00:19:36.184 } 00:19:36.184 }, 00:19:36.184 { 00:19:36.184 "method": "bdev_nvme_set_hotplug", 00:19:36.184 "params": { 00:19:36.184 "period_us": 100000, 00:19:36.184 "enable": false 00:19:36.184 } 00:19:36.184 }, 00:19:36.184 { 00:19:36.184 "method": "bdev_enable_histogram", 00:19:36.184 "params": { 00:19:36.184 "name": "nvme0n1", 00:19:36.184 "enable": true 00:19:36.184 } 00:19:36.184 }, 00:19:36.184 { 00:19:36.184 "method": "bdev_wait_for_examine" 00:19:36.184 } 00:19:36.184 ] 00:19:36.184 }, 00:19:36.184 { 00:19:36.184 "subsystem": "nbd", 00:19:36.184 "config": [] 00:19:36.184 } 00:19:36.184 ] 00:19:36.184 }' 00:19:36.184 20:45:10 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 2724551 00:19:36.184 20:45:10 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@948 -- # '[' -z 2724551 ']'
00:19:36.184 20:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2724551
00:19:36.184 20:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:19:36.184 20:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:36.184 20:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2724551
00:19:36.442 20:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:19:36.442 20:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:19:36.442 20:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2724551'
00:19:36.442 killing process with pid 2724551
00:19:36.442 20:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2724551
00:19:36.442 Received shutdown signal, test time was about 1.000000 seconds
00:19:36.442
00:19:36.442 Latency(us)
00:19:36.442 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:36.442 ===================================================================================================================
00:19:36.442 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:36.442 20:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2724551
00:19:36.442 20:45:10 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 2724141
00:19:36.442 20:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2724141 ']'
00:19:36.442 20:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2724141
00:19:36.442 20:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:19:36.442 20:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:36.442 20:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2724141
00:19:36.442 20:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:19:36.442 20:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:19:36.442 20:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2724141'
00:19:36.442 killing process with pid 2724141
00:19:36.442 20:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2724141
00:19:36.442 20:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2724141
00:19:36.701 20:45:11 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62
00:19:36.701 20:45:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:19:36.701 20:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable
00:19:36.701 20:45:11 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:19:36.701 "subsystems": [ 00:19:36.701 { 00:19:36.701 "subsystem": "keyring", 00:19:36.701 "config": [ 00:19:36.701 { 00:19:36.701 "method": "keyring_file_add_key", 00:19:36.701 "params": { 00:19:36.701 "name": "key0", 00:19:36.701 "path": "/tmp/tmp.YP6M1gS6Yz" 00:19:36.701 } 00:19:36.701 } 00:19:36.701 ] 00:19:36.701 }, 00:19:36.701 { 00:19:36.701 "subsystem": "iobuf", 00:19:36.701 "config": [ 00:19:36.701 { 00:19:36.701 "method": "iobuf_set_options", 00:19:36.701 "params": { 00:19:36.701 "small_pool_count": 8192, 00:19:36.701 "large_pool_count": 1024, 00:19:36.701 "small_bufsize": 8192, 00:19:36.701 "large_bufsize": 135168 00:19:36.701 } 00:19:36.701 } 00:19:36.701 ] 00:19:36.701 },
00:19:36.701 { 00:19:36.701 "subsystem": "sock", 00:19:36.701 "config": [ 00:19:36.701 { 00:19:36.701 "method": "sock_set_default_impl", 00:19:36.701 "params": { 00:19:36.701 "impl_name": "posix" 00:19:36.701 } 00:19:36.701 }, 00:19:36.701 { 00:19:36.701 "method": "sock_impl_set_options", 00:19:36.701 "params": { 00:19:36.701 "impl_name": "ssl", 00:19:36.701 "recv_buf_size": 4096, 00:19:36.701 "send_buf_size": 4096, 00:19:36.701 "enable_recv_pipe": true, 00:19:36.701 "enable_quickack": false, 00:19:36.701 "enable_placement_id": 0, 00:19:36.701 "enable_zerocopy_send_server": true, 00:19:36.701 "enable_zerocopy_send_client": false, 00:19:36.701 "zerocopy_threshold": 0, 00:19:36.701 "tls_version": 0, 00:19:36.701 "enable_ktls": false 00:19:36.701 } 00:19:36.701 }, 00:19:36.701 { 00:19:36.701 "method": "sock_impl_set_options", 00:19:36.701 "params": { 00:19:36.701 "impl_name": "posix", 00:19:36.701 "recv_buf_size": 2097152, 00:19:36.701 "send_buf_size": 2097152, 00:19:36.701 "enable_recv_pipe": true, 00:19:36.701 "enable_quickack": false, 00:19:36.701 "enable_placement_id": 0, 00:19:36.701 "enable_zerocopy_send_server": true, 00:19:36.701 "enable_zerocopy_send_client": false, 00:19:36.701 "zerocopy_threshold": 0, 00:19:36.701 "tls_version": 0, 00:19:36.701 "enable_ktls": false 00:19:36.701 } 00:19:36.701 } 00:19:36.701 ] 00:19:36.701 }, 00:19:36.701 { 00:19:36.701 "subsystem": "vmd", 00:19:36.701 "config": [] 00:19:36.701 }, 00:19:36.701 { 00:19:36.701 "subsystem": "accel", 00:19:36.701 "config": [ 00:19:36.701 { 00:19:36.701 "method": "accel_set_options", 00:19:36.701 "params": { 00:19:36.701 "small_cache_size": 128, 00:19:36.701 "large_cache_size": 16, 00:19:36.701 "task_count": 2048, 00:19:36.701 "sequence_count": 2048, 00:19:36.701 "buf_count": 2048 00:19:36.701 } 00:19:36.701 } 00:19:36.701 ] 00:19:36.701 }, 00:19:36.701 { 00:19:36.701 "subsystem": "bdev", 00:19:36.701 "config": [ 00:19:36.701 { 00:19:36.701 "method": "bdev_set_options", 00:19:36.701 "params": { 00:19:36.701 "bdev_io_pool_size": 65535, 00:19:36.701 "bdev_io_cache_size": 256, 00:19:36.701 "bdev_auto_examine": true, 00:19:36.701 "iobuf_small_cache_size": 128, 00:19:36.701 "iobuf_large_cache_size": 16 00:19:36.701 } 00:19:36.701 }, 00:19:36.701 { 00:19:36.701 "method": "bdev_raid_set_options", 00:19:36.701 "params": { 00:19:36.701 "process_window_size_kb": 1024 00:19:36.701 } 00:19:36.701 }, 00:19:36.701 { 00:19:36.701 "method": "bdev_iscsi_set_options", 00:19:36.701 "params": { 00:19:36.701 "timeout_sec": 30 00:19:36.701 } 00:19:36.701 }, 00:19:36.701 { 00:19:36.702 "method": "bdev_nvme_set_options", 00:19:36.702 "params": { 00:19:36.702 "action_on_timeout": "none", 00:19:36.702 "timeout_us": 0, 00:19:36.702 "timeout_admin_us": 0, 00:19:36.702 "keep_alive_timeout_ms": 10000, 00:19:36.702 "arbitration_burst": 0, 00:19:36.702 "low_priority_weight": 0, 00:19:36.702 "medium_priority_weight": 0, 00:19:36.702 "high_priority_weight": 0, 00:19:36.702 "nvme_adminq_poll_period_us": 10000, 00:19:36.702 "nvme_ioq_poll_period_us": 0, 00:19:36.702 "io_queue_requests": 0, 00:19:36.702 "delay_cmd_submit": true, 00:19:36.702 "transport_retry_count": 4, 00:19:36.702 "bdev_retry_count": 3, 00:19:36.702 "transport_ack_timeout": 0, 00:19:36.702 "ctrlr_loss_timeout_sec": 0, 00:19:36.702 "reconnect_delay_sec": 0, 00:19:36.702 "fast_io_fail_timeout_sec": 0, 00:19:36.702 "disable_auto_failback": false, 00:19:36.702 "generate_uuids": false, 00:19:36.702 "transport_tos": 0, 00:19:36.702 "nvme_error_stat": false, 00:19:36.702 "rdma_srq_size": 0, 
00:19:36.702 "io_path_stat": false, 00:19:36.702 "allow_accel_sequence": false, 00:19:36.702 "rdma_max_cq_size": 0, 00:19:36.702 "rdma_cm_event_timeout_ms": 0, 00:19:36.702 "dhchap_digests": [ 00:19:36.702 "sha256", 00:19:36.702 "sha384", 00:19:36.702 "sha512" 00:19:36.702 ], 00:19:36.702 "dhchap_dhgroups": [ 00:19:36.702 "null", 00:19:36.702 "ffdhe2048", 00:19:36.702 "ffdhe3072", 00:19:36.702 "ffdhe4096", 00:19:36.702 "ffdhe6144", 00:19:36.702 "ffdhe8192" 00:19:36.702 ] 00:19:36.702 } 00:19:36.702 }, 00:19:36.702 { 00:19:36.702 "method": "bdev_nvme_set_hotplug", 00:19:36.702 "params": { 00:19:36.702 "period_us": 100000, 00:19:36.702 "enable": false 00:19:36.702 } 00:19:36.702 }, 00:19:36.702 { 00:19:36.702 "method": "bdev_malloc_create", 00:19:36.702 "params": { 00:19:36.702 "name": "malloc0", 00:19:36.702 "num_blocks": 8192, 00:19:36.702 "block_size": 4096, 00:19:36.702 "physical_block_size": 4096, 00:19:36.702 "uuid": "f472757b-56c6-44e7-8e3d-cf0d03ebf6a4", 00:19:36.702 "optimal_io_boundary": 0 00:19:36.702 } 00:19:36.702 }, 00:19:36.702 { 00:19:36.702 "method": "bdev_wait_for_examine" 00:19:36.702 } 00:19:36.702 ] 00:19:36.702 }, 00:19:36.702 { 00:19:36.702 "subsystem": "nbd", 00:19:36.702 "config": [] 00:19:36.702 }, 00:19:36.702 { 00:19:36.702 "subsystem": "scheduler", 00:19:36.702 "config": [ 00:19:36.702 { 00:19:36.702 "method": "framework_set_scheduler", 00:19:36.702 "params": { 00:19:36.702 "name": "static" 00:19:36.702 } 00:19:36.702 } 00:19:36.702 ] 00:19:36.702 }, 00:19:36.702 { 00:19:36.702 "subsystem": "nvmf", 00:19:36.702 "config": [ 00:19:36.702 { 00:19:36.702 "method": "nvmf_set_config", 00:19:36.702 "params": { 00:19:36.702 "discovery_filter": "match_any", 00:19:36.702 "admin_cmd_passthru": { 00:19:36.702 "identify_ctrlr": false 00:19:36.702 } 00:19:36.702 } 00:19:36.702 }, 00:19:36.702 { 00:19:36.702 "method": "nvmf_set_max_subsystems", 00:19:36.702 "params": { 00:19:36.702 "max_subsystems": 1024 00:19:36.702 } 00:19:36.702 }, 00:19:36.702 { 00:19:36.702 "method": "nvmf_set_crdt", 00:19:36.702 "params": { 00:19:36.702 "crdt1": 0, 00:19:36.702 "crdt2": 0, 00:19:36.702 "crdt3": 0 00:19:36.702 } 00:19:36.702 }, 00:19:36.702 { 00:19:36.702 "method": "nvmf_create_transport", 00:19:36.702 "params": { 00:19:36.702 "trtype": "TCP", 00:19:36.702 "max_queue_depth": 128, 00:19:36.702 "max_io_qpairs_per_ctrlr": 127, 00:19:36.702 "in_capsule_data_size": 4096, 00:19:36.702 "max_io_size": 131072, 00:19:36.702 "io_unit_size": 131072, 00:19:36.702 "max_aq_depth": 128, 00:19:36.702 "num_shared_buffers": 511, 00:19:36.702 "buf_cache_size": 4294967295, 00:19:36.702 "dif_insert_or_strip": false, 00:19:36.702 "zcopy": false, 00:19:36.702 "c2h_success": false, 00:19:36.702 "sock_priority": 0, 00:19:36.702 "abort_timeout_sec": 1, 00:19:36.702 "ack_timeout": 0, 00:19:36.702 "data_wr_pool_size": 0 00:19:36.702 } 00:19:36.702 }, 00:19:36.702 { 00:19:36.702 "method": "nvmf_create_subsystem", 00:19:36.702 "params": { 00:19:36.702 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:36.702 "allow_any_host": false, 00:19:36.702 "serial_number": "00000000000000000000", 00:19:36.702 "model_number": "SPDK bdev Controller", 00:19:36.702 "max_namespaces": 32, 00:19:36.702 "min_cntlid": 1, 00:19:36.702 "max_cntlid": 65519, 00:19:36.702 "ana_reporting": false 00:19:36.702 } 00:19:36.702 }, 00:19:36.702 { 00:19:36.702 "method": "nvmf_subsystem_add_host", 00:19:36.702 "params": { 00:19:36.702 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:36.702 "host": "nqn.2016-06.io.spdk:host1", 00:19:36.702 "psk": "key0" 00:19:36.702 } 
00:19:36.702 }, 00:19:36.702 { 00:19:36.702 "method": "nvmf_subsystem_add_ns", 00:19:36.702 "params": { 00:19:36.702 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:36.702 "namespace": { 00:19:36.702 "nsid": 1, 00:19:36.702 "bdev_name": "malloc0", 00:19:36.702 "nguid": "F472757B56C644E78E3DCF0D03EBF6A4", 00:19:36.702 "uuid": "f472757b-56c6-44e7-8e3d-cf0d03ebf6a4", 00:19:36.702 "no_auto_visible": false 00:19:36.702 } 00:19:36.702 } 00:19:36.702 }, 00:19:36.702 { 00:19:36.702 "method": "nvmf_subsystem_add_listener", 00:19:36.702 "params": { 00:19:36.702 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:36.702 "listen_address": { 00:19:36.702 "trtype": "TCP", 00:19:36.702 "adrfam": "IPv4", 00:19:36.702 "traddr": "10.0.0.2", 00:19:36.702 "trsvcid": "4420" 00:19:36.702 }, 00:19:36.702 "secure_channel": false, 00:19:36.702 "sock_impl": "ssl" 00:19:36.702 } 00:19:36.702 } 00:19:36.702 ] 00:19:36.702 } 00:19:36.702 ] 00:19:36.702 }' 00:19:36.702 20:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.702 20:45:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2725199 00:19:36.702 20:45:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2725199 00:19:36.702 20:45:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:36.702 20:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2725199 ']' 00:19:36.702 20:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.702 20:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:36.702 20:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.702 20:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:36.702 20:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.702 [2024-07-15 20:45:11.164503] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:19:36.702 [2024-07-15 20:45:11.164549] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.962 EAL: No free 2048 kB hugepages reported on node 1 00:19:36.962 [2024-07-15 20:45:11.221997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.962 [2024-07-15 20:45:11.292111] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:36.962 [2024-07-15 20:45:11.292152] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:36.962 [2024-07-15 20:45:11.292159] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:36.962 [2024-07-15 20:45:11.292168] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:36.962 [2024-07-15 20:45:11.292173] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:36.962 [2024-07-15 20:45:11.292233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.221 [2024-07-15 20:45:11.501780] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:37.221 [2024-07-15 20:45:11.533812] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:37.221 [2024-07-15 20:45:11.542555] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:37.480 20:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:37.480 20:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:37.480 20:45:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:37.480 20:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:37.480 20:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:37.739 20:45:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:37.739 20:45:11 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=2725280 00:19:37.739 20:45:11 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 2725280 /var/tmp/bdevperf.sock 00:19:37.739 20:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2725280 ']' 00:19:37.739 20:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:37.739 20:45:11 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:37.739 20:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:37.739 20:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:37.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
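Condensed, the bdevperf launch traced here mirrors the target startup: the configuration captured earlier by save_config is replayed on an inherited descriptor (/dev/fd/63 in the trace) and the app is polled on a private RPC socket. A minimal sketch, assuming $bperfcfg holds the saved JSON and waitforlisten is the helper from the suite's common scripts:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 \
        -c <(echo "$bperfcfg") &                      # config arrives on /dev/fd/63
    bdevperf_pid=$!
    waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock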
00:19:37.739 20:45:11 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:19:37.739 "subsystems": [ 00:19:37.739 { 00:19:37.739 "subsystem": "keyring", 00:19:37.739 "config": [ 00:19:37.739 { 00:19:37.739 "method": "keyring_file_add_key", 00:19:37.739 "params": { 00:19:37.739 "name": "key0", 00:19:37.739 "path": "/tmp/tmp.YP6M1gS6Yz" 00:19:37.739 } 00:19:37.739 } 00:19:37.739 ] 00:19:37.739 }, 00:19:37.739 { 00:19:37.739 "subsystem": "iobuf", 00:19:37.739 "config": [ 00:19:37.739 { 00:19:37.739 "method": "iobuf_set_options", 00:19:37.739 "params": { 00:19:37.739 "small_pool_count": 8192, 00:19:37.739 "large_pool_count": 1024, 00:19:37.739 "small_bufsize": 8192, 00:19:37.739 "large_bufsize": 135168 00:19:37.739 } 00:19:37.739 } 00:19:37.739 ] 00:19:37.739 }, 00:19:37.739 { 00:19:37.739 "subsystem": "sock", 00:19:37.739 "config": [ 00:19:37.739 { 00:19:37.739 "method": "sock_set_default_impl", 00:19:37.739 "params": { 00:19:37.739 "impl_name": "posix" 00:19:37.739 } 00:19:37.739 }, 00:19:37.739 { 00:19:37.739 "method": "sock_impl_set_options", 00:19:37.739 "params": { 00:19:37.739 "impl_name": "ssl", 00:19:37.739 "recv_buf_size": 4096, 00:19:37.739 "send_buf_size": 4096, 00:19:37.739 "enable_recv_pipe": true, 00:19:37.739 "enable_quickack": false, 00:19:37.739 "enable_placement_id": 0, 00:19:37.739 "enable_zerocopy_send_server": true, 00:19:37.739 "enable_zerocopy_send_client": false, 00:19:37.739 "zerocopy_threshold": 0, 00:19:37.739 "tls_version": 0, 00:19:37.739 "enable_ktls": false 00:19:37.739 } 00:19:37.739 }, 00:19:37.739 { 00:19:37.739 "method": "sock_impl_set_options", 00:19:37.739 "params": { 00:19:37.739 "impl_name": "posix", 00:19:37.739 "recv_buf_size": 2097152, 00:19:37.739 "send_buf_size": 2097152, 00:19:37.739 "enable_recv_pipe": true, 00:19:37.739 "enable_quickack": false, 00:19:37.739 "enable_placement_id": 0, 00:19:37.739 "enable_zerocopy_send_server": true, 00:19:37.739 "enable_zerocopy_send_client": false, 00:19:37.739 "zerocopy_threshold": 0, 00:19:37.739 "tls_version": 0, 00:19:37.739 "enable_ktls": false 00:19:37.739 } 00:19:37.739 } 00:19:37.739 ] 00:19:37.739 }, 00:19:37.739 { 00:19:37.739 "subsystem": "vmd", 00:19:37.739 "config": [] 00:19:37.739 }, 00:19:37.739 { 00:19:37.739 "subsystem": "accel", 00:19:37.739 "config": [ 00:19:37.739 { 00:19:37.739 "method": "accel_set_options", 00:19:37.739 "params": { 00:19:37.740 "small_cache_size": 128, 00:19:37.740 "large_cache_size": 16, 00:19:37.740 "task_count": 2048, 00:19:37.740 "sequence_count": 2048, 00:19:37.740 "buf_count": 2048 00:19:37.740 } 00:19:37.740 } 00:19:37.740 ] 00:19:37.740 }, 00:19:37.740 { 00:19:37.740 "subsystem": "bdev", 00:19:37.740 "config": [ 00:19:37.740 { 00:19:37.740 "method": "bdev_set_options", 00:19:37.740 "params": { 00:19:37.740 "bdev_io_pool_size": 65535, 00:19:37.740 "bdev_io_cache_size": 256, 00:19:37.740 "bdev_auto_examine": true, 00:19:37.740 "iobuf_small_cache_size": 128, 00:19:37.740 "iobuf_large_cache_size": 16 00:19:37.740 } 00:19:37.740 }, 00:19:37.740 { 00:19:37.740 "method": "bdev_raid_set_options", 00:19:37.740 "params": { 00:19:37.740 "process_window_size_kb": 1024 00:19:37.740 } 00:19:37.740 }, 00:19:37.740 { 00:19:37.740 "method": "bdev_iscsi_set_options", 00:19:37.740 "params": { 00:19:37.740 "timeout_sec": 30 00:19:37.740 } 00:19:37.740 }, 00:19:37.740 { 00:19:37.740 "method": "bdev_nvme_set_options", 00:19:37.740 "params": { 00:19:37.740 "action_on_timeout": "none", 00:19:37.740 "timeout_us": 0, 00:19:37.740 "timeout_admin_us": 0, 00:19:37.740 "keep_alive_timeout_ms": 
10000, 00:19:37.740 "arbitration_burst": 0, 00:19:37.740 "low_priority_weight": 0, 00:19:37.740 "medium_priority_weight": 0, 00:19:37.740 "high_priority_weight": 0, 00:19:37.740 "nvme_adminq_poll_period_us": 10000, 00:19:37.740 "nvme_ioq_poll_period_us": 0, 00:19:37.740 "io_queue_requests": 512, 00:19:37.740 "delay_cmd_submit": true, 00:19:37.740 "transport_retry_count": 4, 00:19:37.740 "bdev_retry_count": 3, 00:19:37.740 "transport_ack_timeout": 0, 00:19:37.740 "ctrlr_loss_timeout_sec": 0, 00:19:37.740 "reconnect_delay_sec": 0, 00:19:37.740 "fast_io_fail_timeout_sec": 0, 00:19:37.740 "disable_auto_failback": false, 00:19:37.740 "generate_uuids": false, 00:19:37.740 "transport_tos": 0, 00:19:37.740 "nvme_error_stat": false, 00:19:37.740 "rdma_srq_size": 0, 00:19:37.740 "io_path_stat": false, 00:19:37.740 "allow_accel_sequence": false, 00:19:37.740 "rdma_max_cq_size": 0, 00:19:37.740 "rdma_cm_event_timeout_ms": 0, 00:19:37.740 "dhchap_digests": [ 00:19:37.740 "sha256", 00:19:37.740 "sha384", 00:19:37.740 "sha512" 00:19:37.740 ], 00:19:37.740 "dhchap_dhgroups": [ 00:19:37.740 "null", 00:19:37.740 "ffdhe2048", 00:19:37.740 "ffdhe3072", 00:19:37.740 "ffdhe4096", 00:19:37.740 "ffdhe6144", 00:19:37.740 "ffdhe8192" 00:19:37.740 ] 00:19:37.740 } 00:19:37.740 }, 00:19:37.740 { 00:19:37.740 "method": "bdev_nvme_attach_controller", 00:19:37.740 "params": { 00:19:37.740 "name": "nvme0", 00:19:37.740 "trtype": "TCP", 00:19:37.740 "adrfam": "IPv4", 00:19:37.740 "traddr": "10.0.0.2", 00:19:37.740 "trsvcid": "4420", 00:19:37.740 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.740 "prchk_reftag": false, 00:19:37.740 "prchk_guard": false, 00:19:37.740 "ctrlr_loss_timeout_sec": 0, 00:19:37.740 "reconnect_delay_sec": 0, 00:19:37.740 "fast_io_fail_timeout_sec": 0, 00:19:37.740 "psk": "key0", 00:19:37.740 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:37.740 "hdgst": false, 00:19:37.740 "ddgst": false 00:19:37.740 } 00:19:37.740 }, 00:19:37.740 { 00:19:37.740 "method": "bdev_nvme_set_hotplug", 00:19:37.740 "params": { 00:19:37.740 "period_us": 100000, 00:19:37.740 "enable": false 00:19:37.740 } 00:19:37.740 }, 00:19:37.740 { 00:19:37.740 "method": "bdev_enable_histogram", 00:19:37.740 "params": { 00:19:37.740 "name": "nvme0n1", 00:19:37.740 "enable": true 00:19:37.740 } 00:19:37.740 }, 00:19:37.740 { 00:19:37.740 "method": "bdev_wait_for_examine" 00:19:37.740 } 00:19:37.740 ] 00:19:37.740 }, 00:19:37.740 { 00:19:37.740 "subsystem": "nbd", 00:19:37.740 "config": [] 00:19:37.740 } 00:19:37.740 ] 00:19:37.740 }' 00:19:37.740 20:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:37.740 20:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:37.740 [2024-07-15 20:45:12.038964] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:19:37.740 [2024-07-15 20:45:12.039013] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2725280 ]
00:19:37.740 EAL: No free 2048 kB hugepages reported on node 1
00:19:37.740 [2024-07-15 20:45:12.092778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:37.740 [2024-07-15 20:45:12.165480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:19:37.740 [2024-07-15 20:45:12.316689] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:19:38.565 20:45:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:19:38.565 20:45:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0
00:19:38.565 20:45:12 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name'
00:19:38.565 20:45:12 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:19:38.566 20:45:12 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:38.566 20:45:13 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:19:38.823 Running I/O for 1 seconds...
00:19:39.758
00:19:39.758 Latency(us)
00:19:39.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:39.758 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:39.758 Verification LBA range: start 0x0 length 0x2000
00:19:39.758 nvme0n1 : 1.02 5283.54 20.64 0.00 0.00 24030.04 5841.25 76591.64
00:19:39.758 ===================================================================================================================
00:19:39.758 Total : 5283.54 20.64 0.00 0.00 24030.04 5841.25 76591.64
00:19:39.758 0
00:19:39.758 20:45:14 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT
00:19:39.758 20:45:14 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup
00:19:39.758 20:45:14 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0
00:19:39.758 20:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id
00:19:39.758 20:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0
00:19:39.758 20:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']'
00:19:39.758 20:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:19:39.758 20:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0
00:19:39.758 20:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]]
00:19:39.758 20:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files
00:19:39.758 20:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:19:39.758 nvmf_trace.0
00:19:39.758 20:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0
00:19:39.758 20:45:14 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 2725280
00:19:39.758 20:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2725280 ']'
00:19:39.758 20:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2725280
00:19:39.758 20:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:19:39.758 20:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:39.758 20:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2725280
00:19:39.758 20:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:19:39.758 20:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:19:39.758 20:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2725280'
00:19:39.758 killing process with pid 2725280
00:19:39.758 20:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2725280
00:19:39.758 Received shutdown signal, test time was about 1.000000 seconds
00:19:39.758
00:19:39.758 Latency(us)
00:19:39.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:39.758 ===================================================================================================================
00:19:39.758 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:39.758 20:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2725280
00:19:40.017 20:45:14 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini
00:19:40.017 20:45:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup
00:19:40.017 20:45:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync
00:19:40.017 20:45:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:19:40.017 20:45:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e
00:19:40.017 20:45:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20}
00:19:40.017 20:45:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:19:40.017 rmmod nvme_tcp
00:19:40.017 rmmod nvme_fabrics
00:19:40.017 rmmod nvme_keyring
00:19:40.017 20:45:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:19:40.017 20:45:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e
00:19:40.017 20:45:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0
00:19:40.017 20:45:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2725199 ']'
00:19:40.017 20:45:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2725199
00:19:40.017 20:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2725199 ']'
00:19:40.017 20:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2725199
00:19:40.017 20:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:19:40.017 20:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:40.017 20:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2725199
00:19:40.277 20:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:19:40.277 20:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:19:40.277 20:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2725199'
00:19:40.277 killing process with pid 2725199
00:19:40.277 20:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2725199
00:19:40.277 20:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2725199
00:19:40.277 20:45:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:19:40.277 20:45:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:19:40.277 20:45:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:19:40.277 20:45:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:19:40.277 20:45:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns
00:19:40.277 20:45:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:40.277 20:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:19:40.277 20:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:42.813 20:45:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:19:42.813 20:45:16 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.MjBY9O4EsQ /tmp/tmp.zavGk8KZe4 /tmp/tmp.YP6M1gS6Yz
00:19:42.813
00:19:42.813 real 1m22.622s
00:19:42.813 user 2m8.283s
00:19:42.813 sys 0m27.110s
00:19:42.813 20:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable
00:19:42.813 20:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:19:42.813 ************************************
00:19:42.813 END TEST nvmf_tls
00:19:42.813 ************************************
00:19:42.813 20:45:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:19:42.813 20:45:16 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp
00:19:42.813 20:45:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:19:42.813 20:45:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:19:42.813 20:45:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:19:42.813 ************************************
00:19:42.813 START TEST nvmf_fips
00:19:42.813 ************************************
00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp
00:19:42.813 * Looking for test storage...
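Before exercising any crypto, fips.sh gates itself on the host OpenSSL: the version must be at least 3.0.0 and a FIPS provider module must be installed. The ge/cmp_versions and modulesdir probing traced below reduce to roughly this sketch (ge is the >= comparator sourced from scripts/common.sh):

    target=3.0.0
    current=$(openssl version | awk '{print $2}')     # "3.0.9" on this host
    ge "$current" "$target"                           # bail out unless OpenSSL >= 3.0.0
    [[ -f "$(openssl info -modulesdir)/fips.so" ]]    # FIPS provider module present?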
00:19:42.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.813 20:45:16 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:19:42.813 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:42.814 20:45:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:19:42.814 Error setting digest 00:19:42.814 0062DEE3DC7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:19:42.814 0062DEE3DC7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:19:42.814 20:45:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:48.108 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:48.108 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:19:48.108 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:48.108 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:48.108 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:48.108 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:48.108 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:48.108 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:19:48.108 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:48.108 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:19:48.108 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:19:48.108 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:19:48.108 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:19:48.108 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:19:48.108 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:19:48.108 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:48.108 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:48.108 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:48.108 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:48.108 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:48.108 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:48.108 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:48.108 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:48.108 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:48.108 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:48.108 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:48.108 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:48.108 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:48.108 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:48.108 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:48.108 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:48.108 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:48.108 
20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:48.108 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:48.108 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:48.108 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:48.108 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:48.109 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:48.109 Found net devices under 0000:86:00.0: cvl_0_0 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:48.109 Found net devices under 0000:86:00.1: cvl_0_1 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:48.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:48.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:19:48.109 00:19:48.109 --- 10.0.0.2 ping statistics --- 00:19:48.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.109 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:48.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:48.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:19:48.109 00:19:48.109 --- 10.0.0.1 ping statistics --- 00:19:48.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.109 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2729265 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2729265 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2729265 ']' 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:48.109 20:45:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:48.109 [2024-07-15 20:45:22.427377] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:19:48.109 [2024-07-15 20:45:22.427424] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.109 EAL: No free 2048 kB hugepages reported on node 1 00:19:48.109 [2024-07-15 20:45:22.483443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.109 [2024-07-15 20:45:22.554782] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:48.109 [2024-07-15 20:45:22.554825] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
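[annotation] nvmfappstart above boils down to two steps: launch nvmf_tgt inside the target namespace, then poll its RPC socket until it answers (the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line). A condensed, hedged sketch of those helpers, assuming upstream rpc.py semantics; the real waitforlisten in autotest_common.sh handles retries and timeouts more carefully:

    # Hedged sketch of nvmfappstart + waitforlisten as traced above; paths from this run.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
    for i in {1..100}; do                          # max_retries=100, as in the trace
        kill -0 "$nvmfpid" || exit 1               # bail out if the target process died
        "$SPDK/scripts/rpc.py" -t 1 -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.5
    done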
00:19:48.109 [2024-07-15 20:45:22.554833] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:48.109 [2024-07-15 20:45:22.554839] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:48.109 [2024-07-15 20:45:22.554844] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:48.109 [2024-07-15 20:45:22.554862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:49.045 20:45:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:49.045 20:45:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:19:49.045 20:45:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:49.045 20:45:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:49.045 20:45:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:49.045 20:45:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.045 20:45:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:19:49.045 20:45:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:49.045 20:45:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:49.045 20:45:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:49.045 20:45:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:49.045 20:45:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:49.045 20:45:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:49.045 20:45:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:49.045 [2024-07-15 20:45:23.389750] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:49.045 [2024-07-15 20:45:23.405766] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:49.045 [2024-07-15 20:45:23.405951] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:49.045 [2024-07-15 20:45:23.434057] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:49.045 malloc0 00:19:49.045 20:45:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:49.045 20:45:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2729340 00:19:49.045 20:45:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2729340 /var/tmp/bdevperf.sock 00:19:49.045 20:45:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:49.045 20:45:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2729340 ']' 00:19:49.045 20:45:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:49.045 20:45:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:19:49.045 20:45:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:49.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:49.045 20:45:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:49.045 20:45:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:49.045 [2024-07-15 20:45:23.516639] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:19:49.045 [2024-07-15 20:45:23.516690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2729340 ] 00:19:49.356 EAL: No free 2048 kB hugepages reported on node 1 00:19:49.356 [2024-07-15 20:45:23.567690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.356 [2024-07-15 20:45:23.640677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:49.923 20:45:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:49.923 20:45:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:19:49.923 20:45:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:50.181 [2024-07-15 20:45:24.455651] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:50.181 [2024-07-15 20:45:24.455737] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:50.181 TLSTESTn1 00:19:50.181 20:45:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:50.181 Running I/O for 10 seconds... 
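[annotation] Between the FIPS provider check and this 10-second run, fips.sh provisioned the target over rpc.py: a TLS PSK interchange key written to key.txt with mode 0600, a TCP transport, a listener on 10.0.0.2:4420, a malloc0 namespace, and a host entry carrying the PSK path (the tcp.c:3693 deprecation warning above is that path being registered). Not all RPC arguments are visible in this slice, so the target-side sequence below is a hedged sketch: subsystem name, bdev size and flag spellings are assumptions, not a verbatim replay of fips.sh.

    # Hedged sketch of the target-side setup implied by the trace; the socket defaults
    # to /var/tmp/spdk.sock, which the netns'ed nvmf_tgt above is serving.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o             # matches NVMF_TRANSPORT_OPTS='-t tcp -o'
    $rpc bdev_malloc_create -b malloc0 32 512        # backing namespace; size/block assumed
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 --secure-channel
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key.txt

The initiator half is verbatim in the trace: bdev_nvme_attach_controller -b TLSTEST ... --psk key.txt against the bdevperf app's own socket /var/tmp/bdevperf.sock, then bdevperf.py perform_tests drives the verify run whose results follow.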
00:20:02.387 
00:20:02.387 Latency(us)
00:20:02.387 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:02.387 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:02.387 Verification LBA range: start 0x0 length 0x2000
00:20:02.387 TLSTESTn1 : 10.02 5411.12 21.14 0.00 0.00 23617.14 6952.51 43538.70
00:20:02.387 ===================================================================================================================
00:20:02.387 Total : 5411.12 21.14 0.00 0.00 23617.14 6952.51 43538.70
00:20:02.387 0
00:20:02.388 20:45:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:20:02.388 20:45:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:20:02.388 20:45:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id
00:20:02.388 20:45:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0
00:20:02.388 20:45:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']'
00:20:02.388 20:45:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:20:02.388 20:45:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0
00:20:02.388 20:45:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]]
00:20:02.388 20:45:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files
00:20:02.388 20:45:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:20:02.388 nvmf_trace.0
00:20:02.388 20:45:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0
00:20:02.388 20:45:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2729340
00:20:02.388 20:45:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2729340 ']'
00:20:02.388 20:45:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2729340
00:20:02.388 20:45:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname
00:20:02.388 20:45:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:20:02.388 20:45:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2729340
00:20:02.388 20:45:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:20:02.388 20:45:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:20:02.388 20:45:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2729340'
00:20:02.388 killing process with pid 2729340
00:20:02.388 20:45:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2729340
00:20:02.388 Received shutdown signal, test time was about 10.000000 seconds
00:20:02.388 
00:20:02.388 Latency(us)
00:20:02.388 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:02.388 ===================================================================================================================
00:20:02.388 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:02.388 [2024-07-15 20:45:34.802674] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:20:02.388 20:45:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2729340
00:20:02.388 20:45:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini
00:20:02.388 20:45:34 nvmf_tcp.nvmf_fips --
nvmf/common.sh@488 -- # nvmfcleanup 00:20:02.388 20:45:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:20:02.388 20:45:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:02.388 20:45:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:20:02.388 20:45:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:02.388 20:45:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:02.388 rmmod nvme_tcp 00:20:02.388 rmmod nvme_fabrics 00:20:02.388 rmmod nvme_keyring 00:20:02.388 20:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:02.388 20:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:20:02.388 20:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:20:02.388 20:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2729265 ']' 00:20:02.388 20:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2729265 00:20:02.388 20:45:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2729265 ']' 00:20:02.388 20:45:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2729265 00:20:02.388 20:45:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:20:02.388 20:45:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:02.388 20:45:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2729265 00:20:02.388 20:45:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:02.388 20:45:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:02.388 20:45:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2729265' 00:20:02.388 killing process with pid 2729265 00:20:02.388 20:45:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2729265 00:20:02.388 [2024-07-15 20:45:35.096407] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:02.388 20:45:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2729265 00:20:02.388 20:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:02.388 20:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:02.388 20:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:02.388 20:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:02.388 20:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:02.388 20:45:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.388 20:45:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:02.388 20:45:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.957 20:45:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:02.957 20:45:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:02.957 00:20:02.957 real 0m20.507s 00:20:02.957 user 0m22.761s 00:20:02.957 sys 0m8.501s 00:20:02.957 20:45:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:02.957 20:45:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:02.957 ************************************ 00:20:02.957 END TEST nvmf_fips 
00:20:02.957 ************************************ 00:20:02.957 20:45:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:02.957 20:45:37 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:20:02.957 20:45:37 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:20:02.957 20:45:37 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:20:02.957 20:45:37 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:20:02.957 20:45:37 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:20:02.957 20:45:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:08.230 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:08.230 20:45:42 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:08.230 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:08.230 Found net devices under 0000:86:00.0: cvl_0_0 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:08.230 Found net devices under 0000:86:00.1: cvl_0_1 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:20:08.230 20:45:42 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:08.230 20:45:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:08.230 20:45:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
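[annotation] gather_supported_nvmf_pci_devs, traced repeatedly above, builds its e810/x722/mlx lists from an associative pci_bus_cache keyed by "0xVENDOR:0xDEVICE". The cache fill itself happens outside this slice, so the lspci-based fill below is only an illustrative assumption that matches the key format and this host's two 0x159b ports:

    # Illustrative sketch only: nvmf/common.sh populates pci_bus_cache elsewhere.
    declare -A pci_bus_cache
    while read -r addr _ vd _; do                         # lspci -Dn: "0000:86:00.0 0200: 8086:159b"
        pci_bus_cache["0x${vd%:*}:0x${vd#*:}"]+=" $addr"
    done < <(lspci -Dn)
    intel=0x8086
    e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})   # E810 family IDs from the trace
    (( ${#e810[@]} > 0 )) && pci_devs=("${e810[@]}")      # this run: both 0x159b functions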
00:20:08.230 20:45:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:08.230 ************************************ 00:20:08.230 START TEST nvmf_perf_adq 00:20:08.230 ************************************ 00:20:08.230 20:45:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:08.230 * Looking for test storage... 00:20:08.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:08.230 20:45:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:08.230 20:45:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:08.230 20:45:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:08.230 20:45:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:08.230 20:45:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:08.230 20:45:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:08.230 20:45:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:08.230 20:45:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:08.230 20:45:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:08.230 20:45:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:08.230 20:45:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:08.230 20:45:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:08.230 20:45:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:08.230 20:45:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:08.230 20:45:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:08.230 20:45:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:08.230 20:45:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:08.230 20:45:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:08.230 20:45:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:08.230 20:45:42 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:08.230 20:45:42 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:08.230 20:45:42 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:08.230 20:45:42 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.230 20:45:42 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.230 20:45:42 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.230 20:45:42 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:08.230 20:45:42 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.230 20:45:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:20:08.230 20:45:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:08.230 20:45:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:08.230 20:45:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:08.231 20:45:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:08.231 20:45:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:08.231 20:45:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:08.231 20:45:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:08.231 20:45:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:08.231 20:45:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:08.231 20:45:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:08.231 20:45:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:13.502 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:13.502 Found 0000:86:00.1 (0x8086 - 0x159b) 
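[annotation] Each "Found ..." PCI hit is then resolved to its kernel netdev through sysfs and kept only if the interface is up, as in the loop that follows (and as already traced twice above); that is all the "Found net devices under ..." lines are. Condensed, with the device list hard-coded from this run:

    # Sketch of the sysfs resolution step; 0000:86:00.x are the two E810 ports
    # found above, cvl_0_0/cvl_0_1 their renamed netdevs.
    for pci in 0000:86:00.0 0000:86:00.1; do
        for path in /sys/bus/pci/devices/$pci/net/*; do
            dev=${path##*/}
            [[ $(cat "$path/operstate") == up ]] &&       # the '[[ up == up ]]' check in the trace
                echo "Found net devices under $pci: $dev"
        done
    done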
00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:13.502 Found net devices under 0000:86:00.0: cvl_0_0 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.502 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:13.503 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.503 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:13.503 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:13.503 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:13.503 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:13.503 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.503 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:13.503 Found net devices under 0000:86:00.1: cvl_0_1 00:20:13.503 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.503 20:45:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:13.503 20:45:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:13.503 20:45:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:13.503 20:45:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:13.503 20:45:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:20:13.503 20:45:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:14.070 20:45:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:16.605 20:45:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:21.873 20:45:55 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:21.873 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:21.873 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:21.873 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:21.874 Found net devices under 0000:86:00.0: cvl_0_0 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:21.874 Found net devices under 0000:86:00.1: cvl_0_1 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:21.874 20:45:55 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:21.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:21.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:20:21.874 00:20:21.874 --- 10.0.0.2 ping statistics --- 00:20:21.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.874 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:21.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:21.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:20:21.874 00:20:21.874 --- 10.0.0.1 ping statistics --- 00:20:21.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.874 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2739209 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2739209 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2739209 ']' 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:21.874 20:45:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:21.874 [2024-07-15 20:45:55.846967] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
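The nvmf_tcp_init sequence traced above is the entire network fixture for this suite: the first E810 port (cvl_0_0) is moved into a private network namespace to act as the NVMe-oF target, the second port (cvl_0_1) stays in the root namespace as the initiator, and reachability is proven in both directions before the target application starts. Condensed into a standalone sketch, with interface, namespace, and address values taken directly from the trace (the addr-flush steps that precede it are omitted):

ip netns add cvl_0_0_ns_spdk                                        # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port out of the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP on the initiator port
ping -c 1 10.0.0.2                                                  # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> root namespace

Launching nvmf_tgt under ip netns exec (the NVMF_TARGET_NS_CMD wrapper seen above) is what lets a single host drive real NIC-to-NIC NVMe/TCP traffic between its own two ports.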
00:20:21.874 [2024-07-15 20:45:55.847013] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:21.874 EAL: No free 2048 kB hugepages reported on node 1 00:20:21.874 [2024-07-15 20:45:55.904257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:21.874 [2024-07-15 20:45:55.986455] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:21.874 [2024-07-15 20:45:55.986489] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:21.874 [2024-07-15 20:45:55.986496] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:21.874 [2024-07-15 20:45:55.986502] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:21.874 [2024-07-15 20:45:55.986507] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:21.874 [2024-07-15 20:45:55.986551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:21.874 [2024-07-15 20:45:55.986648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:21.874 [2024-07-15 20:45:55.986666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:21.874 [2024-07-15 20:45:55.986667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.441 [2024-07-15 20:45:56.844393] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.441 Malloc1 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.441 [2024-07-15 20:45:56.896210] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2739291 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:20:22.441 20:45:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:22.731 EAL: No free 2048 kB hugepages reported on node 1 00:20:24.627 20:45:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:20:24.627 20:45:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.627 20:45:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:24.627 20:45:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.627 20:45:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:20:24.627 
"tick_rate": 2300000000, 00:20:24.627 "poll_groups": [ 00:20:24.627 { 00:20:24.627 "name": "nvmf_tgt_poll_group_000", 00:20:24.627 "admin_qpairs": 1, 00:20:24.627 "io_qpairs": 1, 00:20:24.627 "current_admin_qpairs": 1, 00:20:24.627 "current_io_qpairs": 1, 00:20:24.627 "pending_bdev_io": 0, 00:20:24.627 "completed_nvme_io": 20323, 00:20:24.627 "transports": [ 00:20:24.627 { 00:20:24.627 "trtype": "TCP" 00:20:24.627 } 00:20:24.627 ] 00:20:24.627 }, 00:20:24.627 { 00:20:24.627 "name": "nvmf_tgt_poll_group_001", 00:20:24.627 "admin_qpairs": 0, 00:20:24.627 "io_qpairs": 1, 00:20:24.627 "current_admin_qpairs": 0, 00:20:24.627 "current_io_qpairs": 1, 00:20:24.627 "pending_bdev_io": 0, 00:20:24.627 "completed_nvme_io": 20570, 00:20:24.627 "transports": [ 00:20:24.627 { 00:20:24.627 "trtype": "TCP" 00:20:24.627 } 00:20:24.627 ] 00:20:24.627 }, 00:20:24.627 { 00:20:24.627 "name": "nvmf_tgt_poll_group_002", 00:20:24.627 "admin_qpairs": 0, 00:20:24.627 "io_qpairs": 1, 00:20:24.627 "current_admin_qpairs": 0, 00:20:24.627 "current_io_qpairs": 1, 00:20:24.627 "pending_bdev_io": 0, 00:20:24.627 "completed_nvme_io": 20487, 00:20:24.627 "transports": [ 00:20:24.627 { 00:20:24.627 "trtype": "TCP" 00:20:24.627 } 00:20:24.627 ] 00:20:24.627 }, 00:20:24.627 { 00:20:24.627 "name": "nvmf_tgt_poll_group_003", 00:20:24.627 "admin_qpairs": 0, 00:20:24.627 "io_qpairs": 1, 00:20:24.627 "current_admin_qpairs": 0, 00:20:24.627 "current_io_qpairs": 1, 00:20:24.627 "pending_bdev_io": 0, 00:20:24.627 "completed_nvme_io": 20182, 00:20:24.627 "transports": [ 00:20:24.627 { 00:20:24.627 "trtype": "TCP" 00:20:24.627 } 00:20:24.627 ] 00:20:24.627 } 00:20:24.627 ] 00:20:24.627 }' 00:20:24.627 20:45:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:24.627 20:45:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:20:24.627 20:45:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:20:24.627 20:45:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:20:24.627 20:45:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2739291 00:20:32.741 Initializing NVMe Controllers 00:20:32.741 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:32.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:32.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:32.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:32.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:32.741 Initialization complete. Launching workers. 
00:20:32.741 ========================================================
00:20:32.741 Latency(us)
00:20:32.741 Device Information : IOPS MiB/s Average min max
00:20:32.741 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10788.31 42.14 5934.11 2602.20 9626.73
00:20:32.741 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10964.81 42.83 5838.11 1901.30 9371.10
00:20:32.741 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10905.81 42.60 5869.36 2374.48 10121.68
00:20:32.741 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10835.01 42.32 5907.85 2089.12 10395.36
00:20:32.741 ========================================================
00:20:32.741 Total : 43493.95 169.90 5887.13 1901.30 10395.36
00:20:32.741
00:20:32.741 20:46:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini
20:46:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup
20:46:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync
20:46:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
20:46:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e
20:46:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20}
20:46:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
20:46:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
20:46:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e
20:46:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0
20:46:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2739209 ']'
20:46:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2739209
20:46:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2739209 ']'
20:46:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2739209
20:46:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname
20:46:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
20:46:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2739209
20:46:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0
20:46:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
20:46:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2739209'
killing process with pid 2739209
20:46:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2739209
20:46:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2739209
00:20:33.001 20:46:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']'
20:46:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
20:46:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini
20:46:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
20:46:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns
20:46:07 nvmf_tcp.nvmf_perf_adq --
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.001 20:46:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:33.001 20:46:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.535 20:46:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:35.535 20:46:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:20:35.535 20:46:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:36.102 20:46:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:38.633 20:46:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:43.903 20:46:17 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:43.903 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:43.903 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
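The discovery loop being replayed here (nvmf/common.sh@382 through @401) resolves each supported PCI function to the netdev the kernel bound to it by globbing sysfs; nothing vendor-specific is involved beyond the ID tables built just above. A minimal standalone equivalent, using the two E810 functions (0x8086:0x159b) reported in the trace:

# For each matched PCI function, report the network interfaces behind it via sysfs.
for pci in 0000:86:00.0 0000:86:00.1; do
    for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$netdir" ] || continue               # driver loaded but no netdev exposed: skip
        echo "Found net devices under $pci: ${netdir##*/}"
    done
done

The trace's extra guards (link-state checks, RDMA-only branches) are dropped here for brevity.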
00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:43.903 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:43.904 Found net devices under 0000:86:00.0: cvl_0_0 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:43.904 Found net devices under 0000:86:00.1: cvl_0_1 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:43.904 
20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:43.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:43.904 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:20:43.904 00:20:43.904 --- 10.0.0.2 ping statistics --- 00:20:43.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.904 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:43.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:43.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:20:43.904 00:20:43.904 --- 10.0.0.1 ping statistics --- 00:20:43.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.904 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:43.904 net.core.busy_poll = 1 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:43.904 net.core.busy_read = 1 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:43.904 20:46:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:20:43.904 20:46:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:43.904 20:46:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:43.904 20:46:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:43.904 20:46:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:43.904 20:46:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:43.904 20:46:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:43.904 20:46:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2743050 00:20:43.904 20:46:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2743050 00:20:43.904 20:46:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:43.904 20:46:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2743050 ']' 00:20:43.904 20:46:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.904 20:46:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:43.904 20:46:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.904 20:46:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:43.904 20:46:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:43.904 [2024-07-15 20:46:18.124393] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:20:43.904 [2024-07-15 20:46:18.124449] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:43.904 EAL: No free 2048 kB hugepages reported on node 1 00:20:43.904 [2024-07-15 20:46:18.182856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:43.904 [2024-07-15 20:46:18.266290] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:43.904 [2024-07-15 20:46:18.266320] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:43.904 [2024-07-15 20:46:18.266328] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:43.904 [2024-07-15 20:46:18.266334] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:43.904 [2024-07-15 20:46:18.266339] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
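The adq_configure_driver steps traced just above are where ADQ actually gets wired up on the target port: hardware TC offload is switched on, socket busy polling is enabled, an mqprio root qdisc carves the queues into two traffic classes in channel mode, and a flower filter pins NVMe/TCP traffic for 10.0.0.2:4420 into hardware TC 1. Pulled out of the trace into one place (every netdev command runs inside the target namespace; $NS below is an editorial shorthand for that prefix, not a variable the test itself uses):

NS="ip netns exec cvl_0_0_ns_spdk"                          # shorthand for the target namespace
$NS ethtool --offload cvl_0_0 hw-tc-offload on              # enable hardware traffic-class offload
$NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1                              # turn on socket busy polling
sysctl -w net.core.busy_read=1
# TC0 gets queues 0-1, TC1 gets queues 2-3; 'hw 1 mode channel' offloads the split to the NIC.
$NS tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
$NS tc qdisc add dev cvl_0_0 ingress
# Steer the NVMe/TCP flow into hardware TC 1; skip_sw keeps the match in the NIC only.
$NS tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The set_xps_rxqs helper invoked afterwards aligns transmit-queue selection with the same mapping, and the SPDK side opts in through sock_impl_set_options --enable-placement-id 1 plus nvmf_create_transport --sock-priority 1, both visible in the next stretch of the trace.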
00:20:43.904 [2024-07-15 20:46:18.266385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.904 [2024-07-15 20:46:18.266398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:43.904 [2024-07-15 20:46:18.266490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:43.904 [2024-07-15 20:46:18.266491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.471 20:46:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:44.471 20:46:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:20:44.471 20:46:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:44.471 20:46:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:44.471 20:46:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.730 20:46:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:44.730 20:46:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:20:44.730 20:46:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:44.730 20:46:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:44.730 20:46:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.730 20:46:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.730 20:46:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.730 20:46:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:44.730 20:46:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:44.730 20:46:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.730 20:46:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.730 20:46:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.730 20:46:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:44.730 20:46:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.730 20:46:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.730 20:46:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.730 20:46:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:44.730 20:46:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.730 20:46:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.730 [2024-07-15 20:46:19.108842] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:44.730 20:46:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.730 20:46:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:44.730 20:46:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.730 20:46:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.730 Malloc1 00:20:44.730 20:46:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.730 20:46:19 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:44.730 20:46:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.730 20:46:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.730 20:46:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.730 20:46:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:44.730 20:46:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.730 20:46:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.730 20:46:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.730 20:46:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:44.730 20:46:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.730 20:46:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.730 [2024-07-15 20:46:19.156323] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:44.730 20:46:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.730 20:46:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2743303 00:20:44.730 20:46:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:20:44.730 20:46:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:44.730 EAL: No free 2048 kB hugepages reported on node 1 00:20:47.264 20:46:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:20:47.264 20:46:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.264 20:46:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:47.264 20:46:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.264 20:46:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:20:47.264 "tick_rate": 2300000000, 00:20:47.264 "poll_groups": [ 00:20:47.264 { 00:20:47.264 "name": "nvmf_tgt_poll_group_000", 00:20:47.264 "admin_qpairs": 1, 00:20:47.264 "io_qpairs": 2, 00:20:47.264 "current_admin_qpairs": 1, 00:20:47.264 "current_io_qpairs": 2, 00:20:47.264 "pending_bdev_io": 0, 00:20:47.264 "completed_nvme_io": 29025, 00:20:47.264 "transports": [ 00:20:47.264 { 00:20:47.264 "trtype": "TCP" 00:20:47.264 } 00:20:47.264 ] 00:20:47.264 }, 00:20:47.264 { 00:20:47.264 "name": "nvmf_tgt_poll_group_001", 00:20:47.264 "admin_qpairs": 0, 00:20:47.264 "io_qpairs": 2, 00:20:47.264 "current_admin_qpairs": 0, 00:20:47.264 "current_io_qpairs": 2, 00:20:47.264 "pending_bdev_io": 0, 00:20:47.264 "completed_nvme_io": 29232, 00:20:47.264 "transports": [ 00:20:47.264 { 00:20:47.264 "trtype": "TCP" 00:20:47.264 } 00:20:47.264 ] 00:20:47.264 }, 00:20:47.264 { 00:20:47.264 "name": "nvmf_tgt_poll_group_002", 00:20:47.264 "admin_qpairs": 0, 00:20:47.264 "io_qpairs": 0, 00:20:47.264 "current_admin_qpairs": 0, 00:20:47.264 "current_io_qpairs": 0, 00:20:47.264 "pending_bdev_io": 0, 00:20:47.264 "completed_nvme_io": 0, 
00:20:47.264 "transports": [ 00:20:47.264 { 00:20:47.264 "trtype": "TCP" 00:20:47.264 } 00:20:47.264 ] 00:20:47.264 }, 00:20:47.264 { 00:20:47.264 "name": "nvmf_tgt_poll_group_003", 00:20:47.264 "admin_qpairs": 0, 00:20:47.264 "io_qpairs": 0, 00:20:47.264 "current_admin_qpairs": 0, 00:20:47.264 "current_io_qpairs": 0, 00:20:47.264 "pending_bdev_io": 0, 00:20:47.264 "completed_nvme_io": 0, 00:20:47.264 "transports": [ 00:20:47.264 { 00:20:47.264 "trtype": "TCP" 00:20:47.264 } 00:20:47.264 ] 00:20:47.264 } 00:20:47.264 ] 00:20:47.264 }' 00:20:47.264 20:46:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:20:47.264 20:46:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:47.264 20:46:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:20:47.264 20:46:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:20:47.264 20:46:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2743303 00:20:55.377 Initializing NVMe Controllers 00:20:55.377 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:55.377 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:55.377 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:55.377 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:55.377 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:55.377 Initialization complete. Launching workers. 00:20:55.377 ======================================================== 00:20:55.377 Latency(us) 00:20:55.377 Device Information : IOPS MiB/s Average min max 00:20:55.377 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7530.70 29.42 8509.44 1509.50 52565.01 00:20:55.377 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9001.40 35.16 7109.23 1287.73 52579.95 00:20:55.377 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6656.90 26.00 9641.44 1360.40 55101.53 00:20:55.377 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7725.50 30.18 8308.63 1501.73 53006.20 00:20:55.377 ======================================================== 00:20:55.377 Total : 30914.50 120.76 8295.31 1287.73 55101.53 00:20:55.377 00:20:55.377 20:46:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:20:55.377 20:46:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:55.377 20:46:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:20:55.377 20:46:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:55.377 20:46:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:20:55.377 20:46:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:55.377 20:46:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:55.377 rmmod nvme_tcp 00:20:55.377 rmmod nvme_fabrics 00:20:55.377 rmmod nvme_keyring 00:20:55.377 20:46:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:55.377 20:46:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:20:55.377 20:46:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:20:55.377 20:46:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2743050 ']' 00:20:55.377 20:46:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 2743050 00:20:55.377 20:46:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2743050 ']' 00:20:55.377 20:46:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2743050 00:20:55.377 20:46:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:20:55.377 20:46:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:55.377 20:46:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2743050 00:20:55.377 20:46:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:55.377 20:46:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:55.377 20:46:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2743050' 00:20:55.377 killing process with pid 2743050 00:20:55.377 20:46:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2743050 00:20:55.377 20:46:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2743050 00:20:55.377 20:46:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:55.377 20:46:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:55.377 20:46:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:55.377 20:46:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:55.377 20:46:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:55.377 20:46:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.377 20:46:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:55.377 20:46:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.710 20:46:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:58.710 20:46:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:20:58.710 00:20:58.710 real 0m50.323s 00:20:58.710 user 2m49.548s 00:20:58.710 sys 0m9.066s 00:20:58.710 20:46:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:58.710 20:46:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:58.710 ************************************ 00:20:58.710 END TEST nvmf_perf_adq 00:20:58.710 ************************************ 00:20:58.710 20:46:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:58.710 20:46:32 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:58.710 20:46:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:58.710 20:46:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:58.710 20:46:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:58.710 ************************************ 00:20:58.710 START TEST nvmf_shutdown 00:20:58.710 ************************************ 00:20:58.710 20:46:32 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:58.710 * Looking for test storage... 
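The shutdown suite is starting above; before following it, note how the perf_adq suite that just ended decided pass/fail. Both passes pull nvmf_get_stats over RPC and count poll groups by current_io_qpairs: the baseline pass requires all four groups to carry I/O, while the ADQ pass requires the flower filter to have concentrated traffic so that at least two of the four groups sit idle (the trace shows exactly two). Reduced to its core, and assuming the standard scripts/rpc.py entry point rather than the harness's rpc_cmd wrapper:

# Count poll groups with zero active I/O queue pairs (the jq/wc pattern from the trace).
idle_groups=$(scripts/rpc.py nvmf_get_stats \
    | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
    | wc -l)
# With ADQ steering in place, fewer than two idle groups means steering did not take effect.
[[ $idle_groups -lt 2 ]] && echo "ADQ did not concentrate traffic onto the steered queues" && exit 1

jq emits one line per selected group (the length of each matching object), so wc -l is simply the number of poll groups that match the filter.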
00:20:58.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:58.710 20:46:32 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:58.710 20:46:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:58.710 20:46:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:58.710 20:46:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:58.710 20:46:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:58.710 20:46:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:58.710 20:46:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:58.710 20:46:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:58.710 20:46:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:58.710 20:46:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:58.710 20:46:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:58.710 20:46:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:58.710 20:46:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:58.710 20:46:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:58.710 20:46:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:58.710 20:46:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:58.710 20:46:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:58.710 20:46:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:58.710 20:46:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:58.710 20:46:32 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:58.710 20:46:32 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:58.711 20:46:32 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:58.711 20:46:32 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.711 20:46:32 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.711 20:46:32 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.711 20:46:32 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:58.711 20:46:32 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.711 20:46:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:20:58.711 20:46:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:58.711 20:46:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:58.711 20:46:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:58.711 20:46:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:58.711 20:46:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:58.711 20:46:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:58.711 20:46:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:58.711 20:46:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:58.711 20:46:32 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:58.711 20:46:32 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:58.711 20:46:32 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:58.711 20:46:32 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:58.711 20:46:32 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:58.711 20:46:32 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:58.711 ************************************ 00:20:58.711 START TEST nvmf_shutdown_tc1 00:20:58.711 ************************************ 00:20:58.711 20:46:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:20:58.711 20:46:32 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:20:58.711 20:46:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:58.711 20:46:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:58.711 20:46:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:58.711 20:46:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:58.711 20:46:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:58.711 20:46:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:58.711 20:46:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.711 20:46:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:58.711 20:46:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.711 20:46:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:58.711 20:46:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:58.711 20:46:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:58.711 20:46:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:03.985 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:03.985 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:03.985 20:46:38 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:03.985 Found net devices under 0000:86:00.0: cvl_0_0 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.985 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:03.985 Found net devices under 0000:86:00.1: cvl_0_1 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:03.986 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:03.986 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:21:03.986 00:21:03.986 --- 10.0.0.2 ping statistics --- 00:21:03.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.986 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:03.986 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:03.986 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:21:03.986 00:21:03.986 --- 10.0.0.1 ping statistics --- 00:21:03.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.986 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2748652 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2748652 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2748652 ']' 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:03.986 20:46:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:04.245 [2024-07-15 20:46:38.493079] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:21:04.245 [2024-07-15 20:46:38.493123] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:04.245 EAL: No free 2048 kB hugepages reported on node 1 00:21:04.245 [2024-07-15 20:46:38.549877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:04.245 [2024-07-15 20:46:38.632201] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:04.245 [2024-07-15 20:46:38.632234] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:04.245 [2024-07-15 20:46:38.632242] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:04.245 [2024-07-15 20:46:38.632247] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:04.245 [2024-07-15 20:46:38.632253] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:04.245 [2024-07-15 20:46:38.632286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:04.245 [2024-07-15 20:46:38.632306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:04.245 [2024-07-15 20:46:38.632415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:04.245 [2024-07-15 20:46:38.632416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:05.182 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:05.182 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:21:05.182 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:05.182 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:05.182 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:05.182 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:05.182 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:05.182 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.182 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:05.182 [2024-07-15 20:46:39.348192] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:05.182 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.182 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:05.182 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:05.182 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:05.182 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:05.182 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:05.182 20:46:39 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:05.182 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:05.182 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:05.182 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:05.182 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:05.182 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:05.182 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:05.182 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:05.182 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:05.182 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:05.182 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:05.182 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:05.182 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:05.182 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:05.182 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:05.182 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:05.182 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:05.182 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:05.182 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:05.182 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:05.182 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:05.182 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.182 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:05.182 Malloc1 00:21:05.182 [2024-07-15 20:46:39.444120] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:05.182 Malloc2 00:21:05.182 Malloc3 00:21:05.182 Malloc4 00:21:05.182 Malloc5 00:21:05.182 Malloc6 00:21:05.442 Malloc7 00:21:05.442 Malloc8 00:21:05.442 Malloc9 00:21:05.442 Malloc10 00:21:05.442 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.442 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:05.442 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:05.442 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:05.442 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2748946 00:21:05.442 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2748946 
/var/tmp/bdevperf.sock 00:21:05.442 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2748946 ']' 00:21:05.442 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:05.442 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:05.442 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:05.442 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:05.442 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:05.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:05.442 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:05.442 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:05.442 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:05.442 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:05.442 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:05.442 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:05.442 { 00:21:05.442 "params": { 00:21:05.442 "name": "Nvme$subsystem", 00:21:05.442 "trtype": "$TEST_TRANSPORT", 00:21:05.442 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.442 "adrfam": "ipv4", 00:21:05.442 "trsvcid": "$NVMF_PORT", 00:21:05.442 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.442 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.442 "hdgst": ${hdgst:-false}, 00:21:05.442 "ddgst": ${ddgst:-false} 00:21:05.442 }, 00:21:05.442 "method": "bdev_nvme_attach_controller" 00:21:05.442 } 00:21:05.442 EOF 00:21:05.442 )") 00:21:05.442 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:05.442 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:05.442 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:05.442 { 00:21:05.442 "params": { 00:21:05.442 "name": "Nvme$subsystem", 00:21:05.442 "trtype": "$TEST_TRANSPORT", 00:21:05.442 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.442 "adrfam": "ipv4", 00:21:05.442 "trsvcid": "$NVMF_PORT", 00:21:05.442 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.442 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.442 "hdgst": ${hdgst:-false}, 00:21:05.442 "ddgst": ${ddgst:-false} 00:21:05.442 }, 00:21:05.442 "method": "bdev_nvme_attach_controller" 00:21:05.442 } 00:21:05.442 EOF 00:21:05.442 )") 00:21:05.442 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:05.442 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:05.442 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:05.442 { 00:21:05.442 "params": { 00:21:05.442 
"name": "Nvme$subsystem", 00:21:05.442 "trtype": "$TEST_TRANSPORT", 00:21:05.442 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.442 "adrfam": "ipv4", 00:21:05.442 "trsvcid": "$NVMF_PORT", 00:21:05.442 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.442 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.442 "hdgst": ${hdgst:-false}, 00:21:05.442 "ddgst": ${ddgst:-false} 00:21:05.442 }, 00:21:05.442 "method": "bdev_nvme_attach_controller" 00:21:05.442 } 00:21:05.442 EOF 00:21:05.442 )") 00:21:05.442 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:05.442 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:05.442 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:05.442 { 00:21:05.442 "params": { 00:21:05.442 "name": "Nvme$subsystem", 00:21:05.442 "trtype": "$TEST_TRANSPORT", 00:21:05.442 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.442 "adrfam": "ipv4", 00:21:05.442 "trsvcid": "$NVMF_PORT", 00:21:05.442 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.443 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.443 "hdgst": ${hdgst:-false}, 00:21:05.443 "ddgst": ${ddgst:-false} 00:21:05.443 }, 00:21:05.443 "method": "bdev_nvme_attach_controller" 00:21:05.443 } 00:21:05.443 EOF 00:21:05.443 )") 00:21:05.443 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:05.443 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:05.443 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:05.443 { 00:21:05.443 "params": { 00:21:05.443 "name": "Nvme$subsystem", 00:21:05.443 "trtype": "$TEST_TRANSPORT", 00:21:05.443 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.443 "adrfam": "ipv4", 00:21:05.443 "trsvcid": "$NVMF_PORT", 00:21:05.443 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.443 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.443 "hdgst": ${hdgst:-false}, 00:21:05.443 "ddgst": ${ddgst:-false} 00:21:05.443 }, 00:21:05.443 "method": "bdev_nvme_attach_controller" 00:21:05.443 } 00:21:05.443 EOF 00:21:05.443 )") 00:21:05.443 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:05.443 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:05.443 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:05.443 { 00:21:05.443 "params": { 00:21:05.443 "name": "Nvme$subsystem", 00:21:05.443 "trtype": "$TEST_TRANSPORT", 00:21:05.443 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.443 "adrfam": "ipv4", 00:21:05.443 "trsvcid": "$NVMF_PORT", 00:21:05.443 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.443 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.443 "hdgst": ${hdgst:-false}, 00:21:05.443 "ddgst": ${ddgst:-false} 00:21:05.443 }, 00:21:05.443 "method": "bdev_nvme_attach_controller" 00:21:05.443 } 00:21:05.443 EOF 00:21:05.443 )") 00:21:05.443 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:05.443 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:05.443 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:05.443 { 00:21:05.443 "params": { 00:21:05.443 "name": "Nvme$subsystem", 
00:21:05.443 "trtype": "$TEST_TRANSPORT", 00:21:05.443 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.443 "adrfam": "ipv4", 00:21:05.443 "trsvcid": "$NVMF_PORT", 00:21:05.443 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.443 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.443 "hdgst": ${hdgst:-false}, 00:21:05.443 "ddgst": ${ddgst:-false} 00:21:05.443 }, 00:21:05.443 "method": "bdev_nvme_attach_controller" 00:21:05.443 } 00:21:05.443 EOF 00:21:05.443 )") 00:21:05.443 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:05.443 [2024-07-15 20:46:39.919109] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:21:05.443 [2024-07-15 20:46:39.919160] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:05.443 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:05.443 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:05.443 { 00:21:05.443 "params": { 00:21:05.443 "name": "Nvme$subsystem", 00:21:05.443 "trtype": "$TEST_TRANSPORT", 00:21:05.443 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.443 "adrfam": "ipv4", 00:21:05.443 "trsvcid": "$NVMF_PORT", 00:21:05.443 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.443 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.443 "hdgst": ${hdgst:-false}, 00:21:05.443 "ddgst": ${ddgst:-false} 00:21:05.443 }, 00:21:05.443 "method": "bdev_nvme_attach_controller" 00:21:05.443 } 00:21:05.443 EOF 00:21:05.443 )") 00:21:05.703 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:05.703 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:05.703 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:05.703 { 00:21:05.703 "params": { 00:21:05.703 "name": "Nvme$subsystem", 00:21:05.703 "trtype": "$TEST_TRANSPORT", 00:21:05.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.703 "adrfam": "ipv4", 00:21:05.703 "trsvcid": "$NVMF_PORT", 00:21:05.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.703 "hdgst": ${hdgst:-false}, 00:21:05.703 "ddgst": ${ddgst:-false} 00:21:05.703 }, 00:21:05.703 "method": "bdev_nvme_attach_controller" 00:21:05.703 } 00:21:05.703 EOF 00:21:05.703 )") 00:21:05.703 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:05.703 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:05.703 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:05.703 { 00:21:05.703 "params": { 00:21:05.703 "name": "Nvme$subsystem", 00:21:05.703 "trtype": "$TEST_TRANSPORT", 00:21:05.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.703 "adrfam": "ipv4", 00:21:05.703 "trsvcid": "$NVMF_PORT", 00:21:05.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.703 "hdgst": ${hdgst:-false}, 00:21:05.703 "ddgst": ${ddgst:-false} 00:21:05.703 }, 00:21:05.703 "method": "bdev_nvme_attach_controller" 00:21:05.703 } 00:21:05.703 EOF 00:21:05.703 )") 00:21:05.703 20:46:39 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:05.703 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:21:05.703 EAL: No free 2048 kB hugepages reported on node 1 00:21:05.703 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:05.703 20:46:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:05.703 "params": { 00:21:05.703 "name": "Nvme1", 00:21:05.703 "trtype": "tcp", 00:21:05.703 "traddr": "10.0.0.2", 00:21:05.703 "adrfam": "ipv4", 00:21:05.703 "trsvcid": "4420", 00:21:05.703 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.703 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:05.703 "hdgst": false, 00:21:05.703 "ddgst": false 00:21:05.703 }, 00:21:05.703 "method": "bdev_nvme_attach_controller" 00:21:05.703 },{ 00:21:05.703 "params": { 00:21:05.703 "name": "Nvme2", 00:21:05.703 "trtype": "tcp", 00:21:05.703 "traddr": "10.0.0.2", 00:21:05.703 "adrfam": "ipv4", 00:21:05.703 "trsvcid": "4420", 00:21:05.703 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:05.703 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:05.703 "hdgst": false, 00:21:05.703 "ddgst": false 00:21:05.703 }, 00:21:05.703 "method": "bdev_nvme_attach_controller" 00:21:05.703 },{ 00:21:05.703 "params": { 00:21:05.703 "name": "Nvme3", 00:21:05.703 "trtype": "tcp", 00:21:05.703 "traddr": "10.0.0.2", 00:21:05.703 "adrfam": "ipv4", 00:21:05.703 "trsvcid": "4420", 00:21:05.703 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:05.703 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:05.703 "hdgst": false, 00:21:05.703 "ddgst": false 00:21:05.703 }, 00:21:05.703 "method": "bdev_nvme_attach_controller" 00:21:05.703 },{ 00:21:05.703 "params": { 00:21:05.703 "name": "Nvme4", 00:21:05.703 "trtype": "tcp", 00:21:05.703 "traddr": "10.0.0.2", 00:21:05.703 "adrfam": "ipv4", 00:21:05.703 "trsvcid": "4420", 00:21:05.703 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:05.703 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:05.703 "hdgst": false, 00:21:05.703 "ddgst": false 00:21:05.703 }, 00:21:05.703 "method": "bdev_nvme_attach_controller" 00:21:05.703 },{ 00:21:05.703 "params": { 00:21:05.703 "name": "Nvme5", 00:21:05.703 "trtype": "tcp", 00:21:05.703 "traddr": "10.0.0.2", 00:21:05.703 "adrfam": "ipv4", 00:21:05.703 "trsvcid": "4420", 00:21:05.703 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:05.703 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:05.703 "hdgst": false, 00:21:05.703 "ddgst": false 00:21:05.703 }, 00:21:05.703 "method": "bdev_nvme_attach_controller" 00:21:05.703 },{ 00:21:05.703 "params": { 00:21:05.703 "name": "Nvme6", 00:21:05.703 "trtype": "tcp", 00:21:05.703 "traddr": "10.0.0.2", 00:21:05.703 "adrfam": "ipv4", 00:21:05.703 "trsvcid": "4420", 00:21:05.703 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:05.703 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:05.703 "hdgst": false, 00:21:05.703 "ddgst": false 00:21:05.703 }, 00:21:05.703 "method": "bdev_nvme_attach_controller" 00:21:05.703 },{ 00:21:05.703 "params": { 00:21:05.703 "name": "Nvme7", 00:21:05.703 "trtype": "tcp", 00:21:05.703 "traddr": "10.0.0.2", 00:21:05.703 "adrfam": "ipv4", 00:21:05.703 "trsvcid": "4420", 00:21:05.703 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:05.703 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:05.703 "hdgst": false, 00:21:05.703 "ddgst": false 00:21:05.703 }, 00:21:05.703 "method": "bdev_nvme_attach_controller" 00:21:05.703 },{ 00:21:05.703 "params": { 00:21:05.703 "name": "Nvme8", 00:21:05.703 "trtype": "tcp", 00:21:05.703 
"traddr": "10.0.0.2", 00:21:05.703 "adrfam": "ipv4", 00:21:05.703 "trsvcid": "4420", 00:21:05.703 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:05.703 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:05.703 "hdgst": false, 00:21:05.703 "ddgst": false 00:21:05.703 }, 00:21:05.703 "method": "bdev_nvme_attach_controller" 00:21:05.703 },{ 00:21:05.703 "params": { 00:21:05.703 "name": "Nvme9", 00:21:05.703 "trtype": "tcp", 00:21:05.703 "traddr": "10.0.0.2", 00:21:05.703 "adrfam": "ipv4", 00:21:05.703 "trsvcid": "4420", 00:21:05.703 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:05.703 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:05.703 "hdgst": false, 00:21:05.703 "ddgst": false 00:21:05.703 }, 00:21:05.703 "method": "bdev_nvme_attach_controller" 00:21:05.703 },{ 00:21:05.703 "params": { 00:21:05.703 "name": "Nvme10", 00:21:05.703 "trtype": "tcp", 00:21:05.703 "traddr": "10.0.0.2", 00:21:05.703 "adrfam": "ipv4", 00:21:05.703 "trsvcid": "4420", 00:21:05.703 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:05.703 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:05.703 "hdgst": false, 00:21:05.703 "ddgst": false 00:21:05.703 }, 00:21:05.703 "method": "bdev_nvme_attach_controller" 00:21:05.703 }' 00:21:05.704 [2024-07-15 20:46:39.976418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.704 [2024-07-15 20:46:40.056140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.082 20:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:07.082 20:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:21:07.082 20:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:07.082 20:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.082 20:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:07.342 20:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.342 20:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2748946 00:21:07.342 20:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:21:07.342 20:46:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:21:08.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2748946 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:08.295 20:46:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2748652 00:21:08.295 20:46:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:08.295 20:46:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:08.295 20:46:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:08.295 20:46:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:08.295 20:46:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:08.295 20:46:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:08.295 { 00:21:08.295 "params": { 00:21:08.295 "name": "Nvme$subsystem", 00:21:08.295 "trtype": "$TEST_TRANSPORT", 00:21:08.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:08.295 "adrfam": "ipv4", 00:21:08.295 "trsvcid": "$NVMF_PORT", 00:21:08.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:08.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:08.295 "hdgst": ${hdgst:-false}, 00:21:08.295 "ddgst": ${ddgst:-false} 00:21:08.295 }, 00:21:08.295 "method": "bdev_nvme_attach_controller" 00:21:08.295 } 00:21:08.295 EOF 00:21:08.295 )") 00:21:08.295 20:46:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:08.295 20:46:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:08.295 20:46:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:08.295 { 00:21:08.295 "params": { 00:21:08.295 "name": "Nvme$subsystem", 00:21:08.295 "trtype": "$TEST_TRANSPORT", 00:21:08.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:08.295 "adrfam": "ipv4", 00:21:08.295 "trsvcid": "$NVMF_PORT", 00:21:08.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:08.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:08.295 "hdgst": ${hdgst:-false}, 00:21:08.295 "ddgst": ${ddgst:-false} 00:21:08.295 }, 00:21:08.295 "method": "bdev_nvme_attach_controller" 00:21:08.295 } 00:21:08.295 EOF 00:21:08.295 )") 00:21:08.295 20:46:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:08.295 20:46:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:08.295 20:46:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:08.295 { 00:21:08.295 "params": { 00:21:08.295 "name": "Nvme$subsystem", 00:21:08.295 "trtype": "$TEST_TRANSPORT", 00:21:08.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:08.295 "adrfam": "ipv4", 00:21:08.295 "trsvcid": "$NVMF_PORT", 00:21:08.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:08.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:08.295 "hdgst": ${hdgst:-false}, 00:21:08.295 "ddgst": ${ddgst:-false} 00:21:08.295 }, 00:21:08.295 "method": "bdev_nvme_attach_controller" 00:21:08.295 } 00:21:08.295 EOF 00:21:08.295 )") 00:21:08.295 20:46:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:08.295 20:46:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:08.295 20:46:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:08.295 { 00:21:08.295 "params": { 00:21:08.295 "name": "Nvme$subsystem", 00:21:08.295 "trtype": "$TEST_TRANSPORT", 00:21:08.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:08.295 "adrfam": "ipv4", 00:21:08.295 "trsvcid": "$NVMF_PORT", 00:21:08.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:08.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:08.295 "hdgst": ${hdgst:-false}, 00:21:08.295 "ddgst": ${ddgst:-false} 00:21:08.295 }, 00:21:08.295 "method": "bdev_nvme_attach_controller" 00:21:08.295 } 00:21:08.295 EOF 00:21:08.295 )") 00:21:08.295 20:46:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:08.295 20:46:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:08.295 20:46:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:21:08.295 { 00:21:08.295 "params": { 00:21:08.295 "name": "Nvme$subsystem", 00:21:08.295 "trtype": "$TEST_TRANSPORT", 00:21:08.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:08.295 "adrfam": "ipv4", 00:21:08.295 "trsvcid": "$NVMF_PORT", 00:21:08.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:08.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:08.295 "hdgst": ${hdgst:-false}, 00:21:08.295 "ddgst": ${ddgst:-false} 00:21:08.295 }, 00:21:08.295 "method": "bdev_nvme_attach_controller" 00:21:08.295 } 00:21:08.295 EOF 00:21:08.295 )") 00:21:08.295 20:46:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:08.295 20:46:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:08.295 20:46:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:08.295 { 00:21:08.295 "params": { 00:21:08.295 "name": "Nvme$subsystem", 00:21:08.295 "trtype": "$TEST_TRANSPORT", 00:21:08.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:08.295 "adrfam": "ipv4", 00:21:08.295 "trsvcid": "$NVMF_PORT", 00:21:08.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:08.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:08.296 "hdgst": ${hdgst:-false}, 00:21:08.296 "ddgst": ${ddgst:-false} 00:21:08.296 }, 00:21:08.296 "method": "bdev_nvme_attach_controller" 00:21:08.296 } 00:21:08.296 EOF 00:21:08.296 )") 00:21:08.296 20:46:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:08.296 20:46:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:08.296 20:46:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:08.296 { 00:21:08.296 "params": { 00:21:08.296 "name": "Nvme$subsystem", 00:21:08.296 "trtype": "$TEST_TRANSPORT", 00:21:08.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:08.296 "adrfam": "ipv4", 00:21:08.296 "trsvcid": "$NVMF_PORT", 00:21:08.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:08.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:08.296 "hdgst": ${hdgst:-false}, 00:21:08.296 "ddgst": ${ddgst:-false} 00:21:08.296 }, 00:21:08.296 "method": "bdev_nvme_attach_controller" 00:21:08.296 } 00:21:08.296 EOF 00:21:08.296 )") 00:21:08.296 [2024-07-15 20:46:42.627289] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:21:08.296 [2024-07-15 20:46:42.627340] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2749363 ] 00:21:08.296 20:46:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:08.296 20:46:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:08.296 20:46:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:08.296 { 00:21:08.296 "params": { 00:21:08.296 "name": "Nvme$subsystem", 00:21:08.296 "trtype": "$TEST_TRANSPORT", 00:21:08.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:08.296 "adrfam": "ipv4", 00:21:08.296 "trsvcid": "$NVMF_PORT", 00:21:08.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:08.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:08.296 "hdgst": ${hdgst:-false}, 00:21:08.296 "ddgst": ${ddgst:-false} 00:21:08.296 }, 00:21:08.296 "method": "bdev_nvme_attach_controller" 00:21:08.296 } 00:21:08.296 EOF 00:21:08.296 )") 00:21:08.296 20:46:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:08.296 20:46:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:08.296 20:46:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:08.296 { 00:21:08.296 "params": { 00:21:08.296 "name": "Nvme$subsystem", 00:21:08.296 "trtype": "$TEST_TRANSPORT", 00:21:08.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:08.296 "adrfam": "ipv4", 00:21:08.296 "trsvcid": "$NVMF_PORT", 00:21:08.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:08.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:08.296 "hdgst": ${hdgst:-false}, 00:21:08.296 "ddgst": ${ddgst:-false} 00:21:08.296 }, 00:21:08.296 "method": "bdev_nvme_attach_controller" 00:21:08.296 } 00:21:08.296 EOF 00:21:08.296 )") 00:21:08.296 20:46:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:08.296 20:46:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:08.296 20:46:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:08.296 { 00:21:08.296 "params": { 00:21:08.296 "name": "Nvme$subsystem", 00:21:08.296 "trtype": "$TEST_TRANSPORT", 00:21:08.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:08.296 "adrfam": "ipv4", 00:21:08.296 "trsvcid": "$NVMF_PORT", 00:21:08.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:08.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:08.296 "hdgst": ${hdgst:-false}, 00:21:08.296 "ddgst": ${ddgst:-false} 00:21:08.296 }, 00:21:08.296 "method": "bdev_nvme_attach_controller" 00:21:08.296 } 00:21:08.296 EOF 00:21:08.296 )") 00:21:08.296 20:46:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:08.296 20:46:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:21:08.296 EAL: No free 2048 kB hugepages reported on node 1 00:21:08.296 20:46:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:08.296 20:46:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:08.296 "params": { 00:21:08.296 "name": "Nvme1", 00:21:08.296 "trtype": "tcp", 00:21:08.296 "traddr": "10.0.0.2", 00:21:08.296 "adrfam": "ipv4", 00:21:08.296 "trsvcid": "4420", 00:21:08.296 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.296 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:08.296 "hdgst": false, 00:21:08.296 "ddgst": false 00:21:08.296 }, 00:21:08.296 "method": "bdev_nvme_attach_controller" 00:21:08.296 },{ 00:21:08.296 "params": { 00:21:08.296 "name": "Nvme2", 00:21:08.296 "trtype": "tcp", 00:21:08.296 "traddr": "10.0.0.2", 00:21:08.296 "adrfam": "ipv4", 00:21:08.296 "trsvcid": "4420", 00:21:08.296 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:08.296 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:08.296 "hdgst": false, 00:21:08.296 "ddgst": false 00:21:08.296 }, 00:21:08.296 "method": "bdev_nvme_attach_controller" 00:21:08.296 },{ 00:21:08.296 "params": { 00:21:08.296 "name": "Nvme3", 00:21:08.296 "trtype": "tcp", 00:21:08.296 "traddr": "10.0.0.2", 00:21:08.296 "adrfam": "ipv4", 00:21:08.296 "trsvcid": "4420", 00:21:08.296 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:08.296 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:08.296 "hdgst": false, 00:21:08.296 "ddgst": false 00:21:08.296 }, 00:21:08.296 "method": "bdev_nvme_attach_controller" 00:21:08.296 },{ 00:21:08.296 "params": { 00:21:08.296 "name": "Nvme4", 00:21:08.296 "trtype": "tcp", 00:21:08.296 "traddr": "10.0.0.2", 00:21:08.296 "adrfam": "ipv4", 00:21:08.296 "trsvcid": "4420", 00:21:08.296 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:08.296 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:08.296 "hdgst": false, 00:21:08.296 "ddgst": false 00:21:08.296 }, 00:21:08.296 "method": "bdev_nvme_attach_controller" 00:21:08.296 },{ 00:21:08.296 "params": { 00:21:08.296 "name": "Nvme5", 00:21:08.296 "trtype": "tcp", 00:21:08.296 "traddr": "10.0.0.2", 00:21:08.296 "adrfam": "ipv4", 00:21:08.296 "trsvcid": "4420", 00:21:08.296 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:08.296 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:08.296 "hdgst": false, 00:21:08.296 "ddgst": false 00:21:08.296 }, 00:21:08.296 "method": "bdev_nvme_attach_controller" 00:21:08.296 },{ 00:21:08.296 "params": { 00:21:08.296 "name": "Nvme6", 00:21:08.296 "trtype": "tcp", 00:21:08.296 "traddr": "10.0.0.2", 00:21:08.296 "adrfam": "ipv4", 00:21:08.296 "trsvcid": "4420", 00:21:08.296 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:08.296 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:08.296 "hdgst": false, 00:21:08.296 "ddgst": false 00:21:08.296 }, 00:21:08.296 "method": "bdev_nvme_attach_controller" 00:21:08.296 },{ 00:21:08.296 "params": { 00:21:08.296 "name": "Nvme7", 00:21:08.296 "trtype": "tcp", 00:21:08.296 "traddr": "10.0.0.2", 00:21:08.296 "adrfam": "ipv4", 00:21:08.296 "trsvcid": "4420", 00:21:08.296 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:08.296 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:08.296 "hdgst": false, 00:21:08.296 "ddgst": false 00:21:08.296 }, 00:21:08.296 "method": "bdev_nvme_attach_controller" 00:21:08.296 },{ 00:21:08.296 "params": { 00:21:08.296 "name": "Nvme8", 00:21:08.296 "trtype": "tcp", 00:21:08.296 "traddr": "10.0.0.2", 00:21:08.296 "adrfam": "ipv4", 00:21:08.296 "trsvcid": "4420", 00:21:08.296 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:08.296 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:21:08.296 "hdgst": false, 00:21:08.296 "ddgst": false 00:21:08.296 }, 00:21:08.296 "method": "bdev_nvme_attach_controller" 00:21:08.296 },{ 00:21:08.296 "params": { 00:21:08.296 "name": "Nvme9", 00:21:08.296 "trtype": "tcp", 00:21:08.296 "traddr": "10.0.0.2", 00:21:08.296 "adrfam": "ipv4", 00:21:08.296 "trsvcid": "4420", 00:21:08.296 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:08.296 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:08.296 "hdgst": false, 00:21:08.296 "ddgst": false 00:21:08.296 }, 00:21:08.296 "method": "bdev_nvme_attach_controller" 00:21:08.296 },{ 00:21:08.296 "params": { 00:21:08.296 "name": "Nvme10", 00:21:08.296 "trtype": "tcp", 00:21:08.296 "traddr": "10.0.0.2", 00:21:08.296 "adrfam": "ipv4", 00:21:08.296 "trsvcid": "4420", 00:21:08.296 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:08.296 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:08.296 "hdgst": false, 00:21:08.296 "ddgst": false 00:21:08.296 }, 00:21:08.296 "method": "bdev_nvme_attach_controller" 00:21:08.296 }' 00:21:08.296 [2024-07-15 20:46:42.685143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.296 [2024-07-15 20:46:42.759786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.673 Running I/O for 1 seconds... 00:21:11.050 00:21:11.050 Latency(us) 00:21:11.050 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:11.050 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:11.050 Verification LBA range: start 0x0 length 0x400 00:21:11.050 Nvme1n1 : 1.03 252.21 15.76 0.00 0.00 249331.38 5071.92 224304.08 00:21:11.050 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:11.050 Verification LBA range: start 0x0 length 0x400 00:21:11.050 Nvme2n1 : 1.15 280.64 17.54 0.00 0.00 221863.02 3305.29 213362.42 00:21:11.050 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:11.050 Verification LBA range: start 0x0 length 0x400 00:21:11.050 Nvme3n1 : 1.12 288.16 18.01 0.00 0.00 211297.84 9573.95 213362.42 00:21:11.050 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:11.050 Verification LBA range: start 0x0 length 0x400 00:21:11.050 Nvme4n1 : 1.17 274.34 17.15 0.00 0.00 219746.48 15044.79 217921.45 00:21:11.050 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:11.050 Verification LBA range: start 0x0 length 0x400 00:21:11.050 Nvme5n1 : 1.17 273.00 17.06 0.00 0.00 217654.32 19261.89 217921.45 00:21:11.050 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:11.050 Verification LBA range: start 0x0 length 0x400 00:21:11.050 Nvme6n1 : 1.18 271.93 17.00 0.00 0.00 215255.66 18236.10 222480.47 00:21:11.050 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:11.050 Verification LBA range: start 0x0 length 0x400 00:21:11.050 Nvme7n1 : 1.16 279.19 17.45 0.00 0.00 205509.49 2835.14 206979.78 00:21:11.050 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:11.050 Verification LBA range: start 0x0 length 0x400 00:21:11.050 Nvme8n1 : 1.15 283.17 17.70 0.00 0.00 195905.21 11169.61 198773.54 00:21:11.050 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:11.050 Verification LBA range: start 0x0 length 0x400 00:21:11.050 Nvme9n1 : 1.17 275.97 17.25 0.00 0.00 201157.57 2835.14 227039.50 00:21:11.050 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:11.050 
Verification LBA range: start 0x0 length 0x400 00:21:11.050 Nvme10n1 : 1.18 271.10 16.94 0.00 0.00 201737.08 16184.54 244363.80 00:21:11.050 =================================================================================================================== 00:21:11.050 Total : 2749.71 171.86 0.00 0.00 213225.41 2835.14 244363.80 00:21:11.050 20:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:21:11.050 20:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:11.050 20:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:11.050 20:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:11.050 20:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:11.050 20:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:11.050 20:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:21:11.050 20:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:11.050 20:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:21:11.050 20:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:11.050 20:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:11.050 rmmod nvme_tcp 00:21:11.310 rmmod nvme_fabrics 00:21:11.310 rmmod nvme_keyring 00:21:11.310 20:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:11.310 20:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:21:11.310 20:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:21:11.310 20:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2748652 ']' 00:21:11.310 20:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2748652 00:21:11.310 20:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 2748652 ']' 00:21:11.310 20:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 2748652 00:21:11.310 20:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:21:11.310 20:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:11.310 20:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2748652 00:21:11.310 20:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:11.310 20:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:11.310 20:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2748652' 00:21:11.310 killing process with pid 2748652 00:21:11.310 20:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 2748652 00:21:11.310 20:46:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 2748652 00:21:11.570 
20:46:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:11.570 20:46:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:11.570 20:46:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:11.570 20:46:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:11.570 20:46:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:11.570 20:46:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.570 20:46:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:11.570 20:46:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.136 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:14.136 00:21:14.136 real 0m15.095s 00:21:14.136 user 0m35.055s 00:21:14.136 sys 0m5.382s 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:14.137 ************************************ 00:21:14.137 END TEST nvmf_shutdown_tc1 00:21:14.137 ************************************ 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:14.137 ************************************ 00:21:14.137 START TEST nvmf_shutdown_tc2 00:21:14.137 ************************************ 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:14.137 20:46:48 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:14.137 20:46:48 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:14.137 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:14.137 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:21:14.137 Found net devices under 0000:86:00.0: cvl_0_0 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:14.137 Found net devices under 0000:86:00.1: cvl_0_1 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:14.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:14.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:21:14.137 00:21:14.137 --- 10.0.0.2 ping statistics --- 00:21:14.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.137 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:14.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:14.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:21:14.137 00:21:14.137 --- 10.0.0.1 ping statistics --- 00:21:14.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.137 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2750526 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2750526 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x1E 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2750526 ']' 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:14.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:14.137 20:46:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:14.137 [2024-07-15 20:46:48.498117] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:21:14.138 [2024-07-15 20:46:48.498158] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:14.138 EAL: No free 2048 kB hugepages reported on node 1 00:21:14.138 [2024-07-15 20:46:48.555554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:14.395 [2024-07-15 20:46:48.629042] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:14.395 [2024-07-15 20:46:48.629080] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:14.395 [2024-07-15 20:46:48.629087] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:14.395 [2024-07-15 20:46:48.629093] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:14.395 [2024-07-15 20:46:48.629098] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
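At this point the tc2 target is running inside the cvl_0_0_ns_spdk namespace. Collected from the nvmf_tcp_init trace above, the topology is reproducible with the commands below (cvl_0_0 and cvl_0_1 are the two ice ports detected earlier; 4420 is the NVMe/TCP listener port):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP in
    ping -c 1 10.0.0.2                                            # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator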
00:21:14.395 [2024-07-15 20:46:48.629200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:14.395 [2024-07-15 20:46:48.629283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:14.395 [2024-07-15 20:46:48.629392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.395 [2024-07-15 20:46:48.629393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:14.961 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:14.961 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:21:14.961 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:14.961 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:14.961 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:14.961 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:14.961 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:14.961 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.961 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:14.961 [2024-07-15 20:46:49.333262] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:14.961 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.961 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:14.961 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:14.961 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:14.961 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:14.961 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:14.961 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:14.961 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:14.961 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:14.961 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:14.961 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:14.961 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:14.961 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:14.961 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:14.961 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:14.961 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:14.961 20:46:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:14.961 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:14.961 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:14.961 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:14.961 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:14.961 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:14.961 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:14.961 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:14.961 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:14.961 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:14.961 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:14.961 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.961 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:14.961 Malloc1 00:21:14.961 [2024-07-15 20:46:49.429055] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:15.219 Malloc2 00:21:15.219 Malloc3 00:21:15.219 Malloc4 00:21:15.219 Malloc5 00:21:15.219 Malloc6 00:21:15.219 Malloc7 00:21:15.478 Malloc8 00:21:15.478 Malloc9 00:21:15.478 Malloc10 00:21:15.478 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.478 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:15.478 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:15.478 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:15.478 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2750813 00:21:15.478 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2750813 /var/tmp/bdevperf.sock 00:21:15.478 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2750813 ']' 00:21:15.478 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:15.478 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:15.478 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:15.478 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:15.478 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:15.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
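bdevperf has just been launched with --json /dev/fd/63, i.e. its config arrives through process substitution from gen_nvmf_target_json, whose per-subsystem heredoc template is traced next. A minimal standalone sketch of the same pattern follows; the outer "subsystems" wrapper is not visible in this log, so its exact shape below is an assumption modeled on nvmf/common.sh:

    gen_config() {
        local i config=()
        for i in "$@"; do
            # one bdev_nvme_attach_controller stanza per subsystem id
            config+=("$(printf '{"params":{"name":"Nvme%s","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false},"method":"bdev_nvme_attach_controller"}' "$i" "$i" "$i")")
        done
        local IFS=,   # join the stanzas with commas, as the IFS=, trace below does
        # assumed wrapper object; validated/pretty-printed with jq as in the trace
        jq . <<<"{\"subsystems\":[{\"subsystem\":\"bdev\",\"config\":[${config[*]}]}]}"
    }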
00:21:15.478 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:21:15.478 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:15.478 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:21:15.478 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:15.478 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.478 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:15.478 { 00:21:15.478 "params": { 00:21:15.478 "name": "Nvme$subsystem", 00:21:15.478 "trtype": "$TEST_TRANSPORT", 00:21:15.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.478 "adrfam": "ipv4", 00:21:15.478 "trsvcid": "$NVMF_PORT", 00:21:15.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.478 "hdgst": ${hdgst:-false}, 00:21:15.478 "ddgst": ${ddgst:-false} 00:21:15.478 }, 00:21:15.478 "method": "bdev_nvme_attach_controller" 00:21:15.478 } 00:21:15.478 EOF 00:21:15.478 )") 00:21:15.478 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:15.478 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.478 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:15.478 { 00:21:15.478 "params": { 00:21:15.478 "name": "Nvme$subsystem", 00:21:15.478 "trtype": "$TEST_TRANSPORT", 00:21:15.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.478 "adrfam": "ipv4", 00:21:15.478 "trsvcid": "$NVMF_PORT", 00:21:15.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.478 "hdgst": ${hdgst:-false}, 00:21:15.478 "ddgst": ${ddgst:-false} 00:21:15.478 }, 00:21:15.478 "method": "bdev_nvme_attach_controller" 00:21:15.478 } 00:21:15.478 EOF 00:21:15.478 )") 00:21:15.478 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:15.478 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.478 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:15.478 { 00:21:15.478 "params": { 00:21:15.478 "name": "Nvme$subsystem", 00:21:15.478 "trtype": "$TEST_TRANSPORT", 00:21:15.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.478 "adrfam": "ipv4", 00:21:15.478 "trsvcid": "$NVMF_PORT", 00:21:15.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.478 "hdgst": ${hdgst:-false}, 00:21:15.478 "ddgst": ${ddgst:-false} 00:21:15.478 }, 00:21:15.478 "method": "bdev_nvme_attach_controller" 00:21:15.478 } 00:21:15.478 EOF 00:21:15.478 )") 00:21:15.478 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:15.478 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.478 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:15.478 { 00:21:15.478 "params": { 00:21:15.478 "name": "Nvme$subsystem", 00:21:15.479 "trtype": "$TEST_TRANSPORT", 00:21:15.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.479 "adrfam": "ipv4", 00:21:15.479 "trsvcid": "$NVMF_PORT", 
00:21:15.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.479 "hdgst": ${hdgst:-false}, 00:21:15.479 "ddgst": ${ddgst:-false} 00:21:15.479 }, 00:21:15.479 "method": "bdev_nvme_attach_controller" 00:21:15.479 } 00:21:15.479 EOF 00:21:15.479 )") 00:21:15.479 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:15.479 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.479 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:15.479 { 00:21:15.479 "params": { 00:21:15.479 "name": "Nvme$subsystem", 00:21:15.479 "trtype": "$TEST_TRANSPORT", 00:21:15.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.479 "adrfam": "ipv4", 00:21:15.479 "trsvcid": "$NVMF_PORT", 00:21:15.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.479 "hdgst": ${hdgst:-false}, 00:21:15.479 "ddgst": ${ddgst:-false} 00:21:15.479 }, 00:21:15.479 "method": "bdev_nvme_attach_controller" 00:21:15.479 } 00:21:15.479 EOF 00:21:15.479 )") 00:21:15.479 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:15.479 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.479 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:15.479 { 00:21:15.479 "params": { 00:21:15.479 "name": "Nvme$subsystem", 00:21:15.479 "trtype": "$TEST_TRANSPORT", 00:21:15.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.479 "adrfam": "ipv4", 00:21:15.479 "trsvcid": "$NVMF_PORT", 00:21:15.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.479 "hdgst": ${hdgst:-false}, 00:21:15.479 "ddgst": ${ddgst:-false} 00:21:15.479 }, 00:21:15.479 "method": "bdev_nvme_attach_controller" 00:21:15.479 } 00:21:15.479 EOF 00:21:15.479 )") 00:21:15.479 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:15.479 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.479 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:15.479 { 00:21:15.479 "params": { 00:21:15.479 "name": "Nvme$subsystem", 00:21:15.479 "trtype": "$TEST_TRANSPORT", 00:21:15.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.479 "adrfam": "ipv4", 00:21:15.479 "trsvcid": "$NVMF_PORT", 00:21:15.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.479 "hdgst": ${hdgst:-false}, 00:21:15.479 "ddgst": ${ddgst:-false} 00:21:15.479 }, 00:21:15.479 "method": "bdev_nvme_attach_controller" 00:21:15.479 } 00:21:15.479 EOF 00:21:15.479 )") 00:21:15.479 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:15.479 [2024-07-15 20:46:49.903197] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:21:15.479 [2024-07-15 20:46:49.903250] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2750813 ] 00:21:15.479 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.479 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:15.479 { 00:21:15.479 "params": { 00:21:15.479 "name": "Nvme$subsystem", 00:21:15.479 "trtype": "$TEST_TRANSPORT", 00:21:15.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.479 "adrfam": "ipv4", 00:21:15.479 "trsvcid": "$NVMF_PORT", 00:21:15.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.479 "hdgst": ${hdgst:-false}, 00:21:15.479 "ddgst": ${ddgst:-false} 00:21:15.479 }, 00:21:15.479 "method": "bdev_nvme_attach_controller" 00:21:15.479 } 00:21:15.479 EOF 00:21:15.479 )") 00:21:15.479 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:15.479 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.479 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:15.479 { 00:21:15.479 "params": { 00:21:15.479 "name": "Nvme$subsystem", 00:21:15.479 "trtype": "$TEST_TRANSPORT", 00:21:15.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.479 "adrfam": "ipv4", 00:21:15.479 "trsvcid": "$NVMF_PORT", 00:21:15.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.479 "hdgst": ${hdgst:-false}, 00:21:15.479 "ddgst": ${ddgst:-false} 00:21:15.479 }, 00:21:15.479 "method": "bdev_nvme_attach_controller" 00:21:15.479 } 00:21:15.479 EOF 00:21:15.479 )") 00:21:15.479 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:15.479 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.479 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:15.479 { 00:21:15.479 "params": { 00:21:15.479 "name": "Nvme$subsystem", 00:21:15.479 "trtype": "$TEST_TRANSPORT", 00:21:15.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.479 "adrfam": "ipv4", 00:21:15.479 "trsvcid": "$NVMF_PORT", 00:21:15.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.479 "hdgst": ${hdgst:-false}, 00:21:15.479 "ddgst": ${ddgst:-false} 00:21:15.479 }, 00:21:15.479 "method": "bdev_nvme_attach_controller" 00:21:15.479 } 00:21:15.479 EOF 00:21:15.479 )") 00:21:15.479 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:15.479 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
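The fully expanded JSON that jq just validated is printed below. An equivalent hand-typed invocation, with the binary path and options taken verbatim from the target/shutdown.sh@102 trace above (the process substitution is what shows up there as /dev/fd/63):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 10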
00:21:15.479 EAL: No free 2048 kB hugepages reported on node 1 00:21:15.479 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:21:15.479 20:46:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:15.479 "params": { 00:21:15.479 "name": "Nvme1", 00:21:15.479 "trtype": "tcp", 00:21:15.479 "traddr": "10.0.0.2", 00:21:15.479 "adrfam": "ipv4", 00:21:15.479 "trsvcid": "4420", 00:21:15.479 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:15.479 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:15.479 "hdgst": false, 00:21:15.479 "ddgst": false 00:21:15.479 }, 00:21:15.479 "method": "bdev_nvme_attach_controller" 00:21:15.479 },{ 00:21:15.479 "params": { 00:21:15.479 "name": "Nvme2", 00:21:15.479 "trtype": "tcp", 00:21:15.479 "traddr": "10.0.0.2", 00:21:15.479 "adrfam": "ipv4", 00:21:15.479 "trsvcid": "4420", 00:21:15.479 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:15.479 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:15.479 "hdgst": false, 00:21:15.479 "ddgst": false 00:21:15.479 }, 00:21:15.479 "method": "bdev_nvme_attach_controller" 00:21:15.479 },{ 00:21:15.479 "params": { 00:21:15.479 "name": "Nvme3", 00:21:15.479 "trtype": "tcp", 00:21:15.479 "traddr": "10.0.0.2", 00:21:15.479 "adrfam": "ipv4", 00:21:15.479 "trsvcid": "4420", 00:21:15.479 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:15.479 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:15.479 "hdgst": false, 00:21:15.479 "ddgst": false 00:21:15.479 }, 00:21:15.479 "method": "bdev_nvme_attach_controller" 00:21:15.479 },{ 00:21:15.479 "params": { 00:21:15.479 "name": "Nvme4", 00:21:15.479 "trtype": "tcp", 00:21:15.479 "traddr": "10.0.0.2", 00:21:15.479 "adrfam": "ipv4", 00:21:15.479 "trsvcid": "4420", 00:21:15.479 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:15.479 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:15.479 "hdgst": false, 00:21:15.479 "ddgst": false 00:21:15.479 }, 00:21:15.479 "method": "bdev_nvme_attach_controller" 00:21:15.479 },{ 00:21:15.479 "params": { 00:21:15.479 "name": "Nvme5", 00:21:15.479 "trtype": "tcp", 00:21:15.479 "traddr": "10.0.0.2", 00:21:15.479 "adrfam": "ipv4", 00:21:15.479 "trsvcid": "4420", 00:21:15.479 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:15.479 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:15.479 "hdgst": false, 00:21:15.479 "ddgst": false 00:21:15.479 }, 00:21:15.479 "method": "bdev_nvme_attach_controller" 00:21:15.479 },{ 00:21:15.479 "params": { 00:21:15.479 "name": "Nvme6", 00:21:15.479 "trtype": "tcp", 00:21:15.479 "traddr": "10.0.0.2", 00:21:15.479 "adrfam": "ipv4", 00:21:15.479 "trsvcid": "4420", 00:21:15.479 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:15.479 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:15.479 "hdgst": false, 00:21:15.479 "ddgst": false 00:21:15.479 }, 00:21:15.479 "method": "bdev_nvme_attach_controller" 00:21:15.479 },{ 00:21:15.479 "params": { 00:21:15.479 "name": "Nvme7", 00:21:15.479 "trtype": "tcp", 00:21:15.479 "traddr": "10.0.0.2", 00:21:15.479 "adrfam": "ipv4", 00:21:15.479 "trsvcid": "4420", 00:21:15.479 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:15.479 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:15.479 "hdgst": false, 00:21:15.479 "ddgst": false 00:21:15.479 }, 00:21:15.479 "method": "bdev_nvme_attach_controller" 00:21:15.479 },{ 00:21:15.479 "params": { 00:21:15.479 "name": "Nvme8", 00:21:15.479 "trtype": "tcp", 00:21:15.479 "traddr": "10.0.0.2", 00:21:15.479 "adrfam": "ipv4", 00:21:15.479 "trsvcid": "4420", 00:21:15.479 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:15.479 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:21:15.479 "hdgst": false, 00:21:15.479 "ddgst": false 00:21:15.479 }, 00:21:15.479 "method": "bdev_nvme_attach_controller" 00:21:15.479 },{ 00:21:15.479 "params": { 00:21:15.479 "name": "Nvme9", 00:21:15.479 "trtype": "tcp", 00:21:15.480 "traddr": "10.0.0.2", 00:21:15.480 "adrfam": "ipv4", 00:21:15.480 "trsvcid": "4420", 00:21:15.480 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:15.480 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:15.480 "hdgst": false, 00:21:15.480 "ddgst": false 00:21:15.480 }, 00:21:15.480 "method": "bdev_nvme_attach_controller" 00:21:15.480 },{ 00:21:15.480 "params": { 00:21:15.480 "name": "Nvme10", 00:21:15.480 "trtype": "tcp", 00:21:15.480 "traddr": "10.0.0.2", 00:21:15.480 "adrfam": "ipv4", 00:21:15.480 "trsvcid": "4420", 00:21:15.480 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:15.480 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:15.480 "hdgst": false, 00:21:15.480 "ddgst": false 00:21:15.480 }, 00:21:15.480 "method": "bdev_nvme_attach_controller" 00:21:15.480 }' 00:21:15.480 [2024-07-15 20:46:49.959107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.738 [2024-07-15 20:46:50.034481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.114 Running I/O for 10 seconds... 00:21:17.114 20:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:17.114 20:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:21:17.114 20:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:17.114 20:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.114 20:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:17.373 20:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.373 20:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:17.373 20:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:17.373 20:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:17.373 20:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:21:17.373 20:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:21:17.373 20:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:17.373 20:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:17.373 20:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:17.373 20:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.373 20:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:17.373 20:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:17.373 20:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.373 20:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:17.373 20:46:51 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']'
00:21:17.373 20:46:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25
00:21:17.631 20:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- ))
00:21:17.631 20:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:21:17.631 20:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:21:17.631 20:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:21:17.631 20:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:17.631 20:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:21:17.631 20:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:17.631 20:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131
00:21:17.631 20:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']'
00:21:17.631 20:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0
00:21:17.631 20:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break
00:21:17.632 20:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0
00:21:17.632 20:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2750813
00:21:17.632 20:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2750813 ']'
00:21:17.632 20:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2750813
00:21:17.632 20:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname
00:21:17.632 20:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:21:17.632 20:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2750813
00:21:17.632 20:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:21:17.632 20:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:21:17.632 20:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2750813'
00:21:17.632 killing process with pid 2750813
00:21:17.632 20:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2750813
00:21:17.632 20:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2750813
00:21:17.890 Received shutdown signal, test time was about 0.636219 seconds
00:21:17.890
00:21:17.890 Latency(us)
00:21:17.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:17.890 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:17.890 Verification LBA range: start 0x0 length 0x400
00:21:17.890 Nvme1n1 : 0.59 326.65 20.42 0.00 0.00 192762.21 18464.06 218833.25
00:21:17.890 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:17.890 Verification LBA range: start 0x0 length 0x400
00:21:17.890 Nvme2n1 : 0.59 325.01 20.31 0.00 0.00 188349.22 15728.64 215186.03
00:21:17.890 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:17.890 Verification LBA range: start 0x0 length 0x400
00:21:17.890 Nvme3n1 : 0.60 321.79 20.11 0.00 0.00 185196.19 22909.11 196038.12
00:21:17.890 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:17.890 Verification LBA range: start 0x0 length 0x400
00:21:17.890 Nvme4n1 : 0.60 320.74 20.05 0.00 0.00 180499.74 18464.06 212450.62
00:21:17.890 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:17.890 Verification LBA range: start 0x0 length 0x400
00:21:17.890 Nvme5n1 : 0.64 302.10 18.88 0.00 0.00 174494.05 18350.08 217921.45
00:21:17.890 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:17.890 Verification LBA range: start 0x0 length 0x400
00:21:17.890 Nvme6n1 : 0.56 227.73 14.23 0.00 0.00 237052.44 33964.74 195126.32
00:21:17.890 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:17.890 Verification LBA range: start 0x0 length 0x400
00:21:17.890 Nvme7n1 : 0.57 225.04 14.06 0.00 0.00 231198.72 36244.26 191479.10
00:21:17.890 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:17.890 Verification LBA range: start 0x0 length 0x400
00:21:17.890 Nvme8n1 : 0.57 225.62 14.10 0.00 0.00 224059.21 34420.65 184184.65
00:21:17.890 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:17.890 Verification LBA range: start 0x0 length 0x400
00:21:17.890 Nvme9n1 : 0.58 233.73 14.61 0.00 0.00 208082.55 2165.54 216097.84
00:21:17.890 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:17.890 Verification LBA range: start 0x0 length 0x400
00:21:17.890 Nvme10n1 : 0.58 228.61 14.29 0.00 0.00 206533.04 2421.98 240716.58
00:21:17.890 ===================================================================================================================
00:21:17.890 Total : 2737.03 171.06 0.00 0.00 199172.23 2165.54 240716.58
00:21:18.149 20:46:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:21:19.085 20:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2750526
00:21:19.085 20:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:21:19.085 20:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:21:19.085 20:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:21:19.085 20:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:19.085 20:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini
00:21:19.085 20:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup
00:21:19.085 20:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync
00:21:19.085 20:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:21:19.085 20:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e
00:21:19.085 20:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20}
00:21:19.085 20:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe
-v -r nvme-tcp 00:21:19.085 rmmod nvme_tcp 00:21:19.085 rmmod nvme_fabrics 00:21:19.085 rmmod nvme_keyring 00:21:19.085 20:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:19.085 20:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:21:19.085 20:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:21:19.085 20:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2750526 ']' 00:21:19.085 20:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2750526 00:21:19.085 20:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2750526 ']' 00:21:19.085 20:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2750526 00:21:19.085 20:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:21:19.085 20:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:19.085 20:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2750526 00:21:19.085 20:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:19.085 20:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:19.085 20:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2750526' 00:21:19.085 killing process with pid 2750526 00:21:19.085 20:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2750526 00:21:19.085 20:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2750526 00:21:19.685 20:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:19.685 20:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:19.685 20:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:19.685 20:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:19.685 20:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:19.685 20:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.685 20:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:19.685 20:46:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.589 20:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:21.589 00:21:21.589 real 0m7.835s 00:21:21.589 user 0m23.521s 00:21:21.589 sys 0m1.213s 00:21:21.589 20:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:21.589 20:46:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:21.589 ************************************ 00:21:21.589 END TEST nvmf_shutdown_tc2 00:21:21.589 ************************************ 00:21:21.589 20:46:56 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:21.589 20:46:56 
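Before tc3 begins below, the progress check both test cases relied on is worth restating. This is a reconstruction of waitforio from the target/shutdown.sh@57-69 xtrace (rpc_cmd is the autotest wrapper around scripts/rpc.py), a sketch rather than the verbatim function:

    waitforio() {
        local rpc_sock=$1 bdev=$2
        local ret=1 i read_io_count
        for ((i = 10; i != 0; i--)); do
            read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
                jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then   # bdevperf made progress
                ret=0
                break
            fi
            sleep 0.25
        done
        return $ret
    }

    waitforio /var/tmp/bdevperf.sock Nvme1n1   # as polled above: 3 reads first, 131 on the retry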
nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:21.589 20:46:56 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:21.589 20:46:56 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:21.590 ************************************ 00:21:21.590 START TEST nvmf_shutdown_tc3 00:21:21.590 ************************************ 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:21:21.590 20:46:56 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:21.590 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:21.590 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:21.590 Found net devices under 0000:86:00.0: cvl_0_0 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:21.590 Found net devices under 0000:86:00.1: cvl_0_1 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:21.590 
20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:21.590 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:21.591 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:21.591 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:21.591 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:21.591 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:21.591 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:21.591 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:21.591 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:21.591 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:21.850 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:21.850 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:21.850 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:21.850 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:21.850 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:21.850 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:21.850 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:21.850 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:21.850 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:21.850 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:21.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:21.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:21:21.850 00:21:21.850 --- 10.0.0.2 ping statistics --- 00:21:21.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.850 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:21:21.850 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:21.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:21.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:21:21.850 00:21:21.850 --- 10.0.0.1 ping statistics --- 00:21:21.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.850 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:21:21.850 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:21.850 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:21:21.850 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:21.850 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:21.850 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:21.850 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:21.850 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:21.850 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:21.850 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:22.109 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:22.109 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:22.109 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:22.109 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:22.109 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2751858 00:21:22.109 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2751858 00:21:22.109 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:22.109 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2751858 ']' 00:21:22.109 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.109 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:22.109 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:22.109 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:22.109 20:46:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:22.109 [2024-07-15 20:46:56.397773] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
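The nvmf_tcp_init sequence traced above builds the usual SPDK two-port loopback topology: the first E810 port (cvl_0_0) is moved into a fresh network namespace and becomes the target side at 10.0.0.2, while its link partner (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1; the nvmf_tgt process started next is therefore launched through ip netns exec so it only sees the target port. Reproduced by hand, with interface names and addresses taken verbatim from the trace, the setup is roughly:

# target port gets its own namespace; initiator port stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# both ends of the back-to-back link share 10.0.0.0/24
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# bring up both ports plus the namespace loopback
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# admit NVMe/TCP (port 4420) on the initiator side, then verify reachability both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1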
00:21:22.109 [2024-07-15 20:46:56.397816] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:22.109 EAL: No free 2048 kB hugepages reported on node 1 00:21:22.109 [2024-07-15 20:46:56.454605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:22.109 [2024-07-15 20:46:56.536156] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:22.109 [2024-07-15 20:46:56.536193] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:22.109 [2024-07-15 20:46:56.536200] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:22.109 [2024-07-15 20:46:56.536207] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:22.109 [2024-07-15 20:46:56.536212] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:22.109 [2024-07-15 20:46:56.536334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:22.109 [2024-07-15 20:46:56.536421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:22.109 [2024-07-15 20:46:56.536529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:22.109 [2024-07-15 20:46:56.536530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:23.044 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:23.044 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:21:23.044 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:23.045 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:23.045 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:23.045 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.045 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:23.045 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.045 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:23.045 [2024-07-15 20:46:57.249312] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.045 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.045 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:23.045 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:23.045 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:23.045 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:23.045 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:23.045 20:46:57 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:23.045 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:23.045 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:23.045 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:23.045 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:23.045 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:23.045 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:23.045 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:23.045 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:23.045 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:23.045 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:23.045 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:23.045 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:23.045 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:23.045 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:23.045 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:23.045 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:23.045 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:23.045 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:23.045 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:23.045 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:23.045 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.045 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:23.045 Malloc1 00:21:23.045 [2024-07-15 20:46:57.340954] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.045 Malloc2 00:21:23.045 Malloc3 00:21:23.045 Malloc4 00:21:23.045 Malloc5 00:21:23.304 Malloc6 00:21:23.304 Malloc7 00:21:23.304 Malloc8 00:21:23.304 Malloc9 00:21:23.304 Malloc10 00:21:23.304 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.304 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:23.304 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:23.304 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:23.304 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2752137 00:21:23.304 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2752137 
/var/tmp/bdevperf.sock 00:21:23.304 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2752137 ']' 00:21:23.304 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:23.304 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:23.304 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:23.304 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:23.304 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:23.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:23.304 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:21:23.304 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:23.304 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:21:23.304 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:23.304 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:23.304 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:23.304 { 00:21:23.304 "params": { 00:21:23.304 "name": "Nvme$subsystem", 00:21:23.304 "trtype": "$TEST_TRANSPORT", 00:21:23.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.304 "adrfam": "ipv4", 00:21:23.304 "trsvcid": "$NVMF_PORT", 00:21:23.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.304 "hdgst": ${hdgst:-false}, 00:21:23.304 "ddgst": ${ddgst:-false} 00:21:23.304 }, 00:21:23.304 "method": "bdev_nvme_attach_controller" 00:21:23.304 } 00:21:23.304 EOF 00:21:23.304 )") 00:21:23.304 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:23.304 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:23.304 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:23.304 { 00:21:23.304 "params": { 00:21:23.304 "name": "Nvme$subsystem", 00:21:23.304 "trtype": "$TEST_TRANSPORT", 00:21:23.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.304 "adrfam": "ipv4", 00:21:23.304 "trsvcid": "$NVMF_PORT", 00:21:23.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.304 "hdgst": ${hdgst:-false}, 00:21:23.304 "ddgst": ${ddgst:-false} 00:21:23.304 }, 00:21:23.304 "method": "bdev_nvme_attach_controller" 00:21:23.304 } 00:21:23.304 EOF 00:21:23.304 )") 00:21:23.304 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:23.304 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:23.304 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:23.304 { 00:21:23.304 "params": { 
00:21:23.304 "name": "Nvme$subsystem", 00:21:23.304 "trtype": "$TEST_TRANSPORT", 00:21:23.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.304 "adrfam": "ipv4", 00:21:23.304 "trsvcid": "$NVMF_PORT", 00:21:23.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.304 "hdgst": ${hdgst:-false}, 00:21:23.304 "ddgst": ${ddgst:-false} 00:21:23.304 }, 00:21:23.304 "method": "bdev_nvme_attach_controller" 00:21:23.304 } 00:21:23.304 EOF 00:21:23.304 )") 00:21:23.304 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:23.304 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:23.304 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:23.304 { 00:21:23.304 "params": { 00:21:23.304 "name": "Nvme$subsystem", 00:21:23.304 "trtype": "$TEST_TRANSPORT", 00:21:23.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.304 "adrfam": "ipv4", 00:21:23.304 "trsvcid": "$NVMF_PORT", 00:21:23.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.304 "hdgst": ${hdgst:-false}, 00:21:23.304 "ddgst": ${ddgst:-false} 00:21:23.304 }, 00:21:23.304 "method": "bdev_nvme_attach_controller" 00:21:23.304 } 00:21:23.304 EOF 00:21:23.304 )") 00:21:23.304 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:23.564 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:23.564 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:23.564 { 00:21:23.564 "params": { 00:21:23.564 "name": "Nvme$subsystem", 00:21:23.564 "trtype": "$TEST_TRANSPORT", 00:21:23.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.564 "adrfam": "ipv4", 00:21:23.564 "trsvcid": "$NVMF_PORT", 00:21:23.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.564 "hdgst": ${hdgst:-false}, 00:21:23.564 "ddgst": ${ddgst:-false} 00:21:23.564 }, 00:21:23.564 "method": "bdev_nvme_attach_controller" 00:21:23.564 } 00:21:23.564 EOF 00:21:23.564 )") 00:21:23.564 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:23.564 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:23.564 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:23.564 { 00:21:23.564 "params": { 00:21:23.564 "name": "Nvme$subsystem", 00:21:23.564 "trtype": "$TEST_TRANSPORT", 00:21:23.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.564 "adrfam": "ipv4", 00:21:23.564 "trsvcid": "$NVMF_PORT", 00:21:23.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.564 "hdgst": ${hdgst:-false}, 00:21:23.564 "ddgst": ${ddgst:-false} 00:21:23.564 }, 00:21:23.564 "method": "bdev_nvme_attach_controller" 00:21:23.564 } 00:21:23.564 EOF 00:21:23.564 )") 00:21:23.564 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:23.564 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:23.564 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:23.564 { 00:21:23.564 "params": { 00:21:23.564 "name": 
"Nvme$subsystem", 00:21:23.564 "trtype": "$TEST_TRANSPORT", 00:21:23.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.564 "adrfam": "ipv4", 00:21:23.564 "trsvcid": "$NVMF_PORT", 00:21:23.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.564 "hdgst": ${hdgst:-false}, 00:21:23.564 "ddgst": ${ddgst:-false} 00:21:23.564 }, 00:21:23.564 "method": "bdev_nvme_attach_controller" 00:21:23.564 } 00:21:23.564 EOF 00:21:23.564 )") 00:21:23.564 [2024-07-15 20:46:57.804411] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:21:23.564 [2024-07-15 20:46:57.804459] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2752137 ] 00:21:23.564 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:23.564 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:23.564 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:23.564 { 00:21:23.564 "params": { 00:21:23.564 "name": "Nvme$subsystem", 00:21:23.564 "trtype": "$TEST_TRANSPORT", 00:21:23.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.564 "adrfam": "ipv4", 00:21:23.564 "trsvcid": "$NVMF_PORT", 00:21:23.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.564 "hdgst": ${hdgst:-false}, 00:21:23.564 "ddgst": ${ddgst:-false} 00:21:23.564 }, 00:21:23.564 "method": "bdev_nvme_attach_controller" 00:21:23.564 } 00:21:23.564 EOF 00:21:23.564 )") 00:21:23.564 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:23.564 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:23.564 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:23.564 { 00:21:23.564 "params": { 00:21:23.564 "name": "Nvme$subsystem", 00:21:23.564 "trtype": "$TEST_TRANSPORT", 00:21:23.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.564 "adrfam": "ipv4", 00:21:23.564 "trsvcid": "$NVMF_PORT", 00:21:23.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.564 "hdgst": ${hdgst:-false}, 00:21:23.564 "ddgst": ${ddgst:-false} 00:21:23.564 }, 00:21:23.564 "method": "bdev_nvme_attach_controller" 00:21:23.564 } 00:21:23.564 EOF 00:21:23.564 )") 00:21:23.564 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:23.564 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:23.564 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:23.564 { 00:21:23.564 "params": { 00:21:23.564 "name": "Nvme$subsystem", 00:21:23.564 "trtype": "$TEST_TRANSPORT", 00:21:23.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.564 "adrfam": "ipv4", 00:21:23.564 "trsvcid": "$NVMF_PORT", 00:21:23.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.564 "hdgst": ${hdgst:-false}, 00:21:23.564 "ddgst": ${ddgst:-false} 00:21:23.565 }, 00:21:23.565 "method": "bdev_nvme_attach_controller" 00:21:23.565 } 00:21:23.565 EOF 00:21:23.565 
)") 00:21:23.565 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:23.565 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.565 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:21:23.565 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:21:23.565 20:46:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:23.565 "params": { 00:21:23.565 "name": "Nvme1", 00:21:23.565 "trtype": "tcp", 00:21:23.565 "traddr": "10.0.0.2", 00:21:23.565 "adrfam": "ipv4", 00:21:23.565 "trsvcid": "4420", 00:21:23.565 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.565 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:23.565 "hdgst": false, 00:21:23.565 "ddgst": false 00:21:23.565 }, 00:21:23.565 "method": "bdev_nvme_attach_controller" 00:21:23.565 },{ 00:21:23.565 "params": { 00:21:23.565 "name": "Nvme2", 00:21:23.565 "trtype": "tcp", 00:21:23.565 "traddr": "10.0.0.2", 00:21:23.565 "adrfam": "ipv4", 00:21:23.565 "trsvcid": "4420", 00:21:23.565 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:23.565 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:23.565 "hdgst": false, 00:21:23.565 "ddgst": false 00:21:23.565 }, 00:21:23.565 "method": "bdev_nvme_attach_controller" 00:21:23.565 },{ 00:21:23.565 "params": { 00:21:23.565 "name": "Nvme3", 00:21:23.565 "trtype": "tcp", 00:21:23.565 "traddr": "10.0.0.2", 00:21:23.565 "adrfam": "ipv4", 00:21:23.565 "trsvcid": "4420", 00:21:23.565 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:23.565 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:23.565 "hdgst": false, 00:21:23.565 "ddgst": false 00:21:23.565 }, 00:21:23.565 "method": "bdev_nvme_attach_controller" 00:21:23.565 },{ 00:21:23.565 "params": { 00:21:23.565 "name": "Nvme4", 00:21:23.565 "trtype": "tcp", 00:21:23.565 "traddr": "10.0.0.2", 00:21:23.565 "adrfam": "ipv4", 00:21:23.565 "trsvcid": "4420", 00:21:23.565 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:23.565 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:23.565 "hdgst": false, 00:21:23.565 "ddgst": false 00:21:23.565 }, 00:21:23.565 "method": "bdev_nvme_attach_controller" 00:21:23.565 },{ 00:21:23.565 "params": { 00:21:23.565 "name": "Nvme5", 00:21:23.565 "trtype": "tcp", 00:21:23.565 "traddr": "10.0.0.2", 00:21:23.565 "adrfam": "ipv4", 00:21:23.565 "trsvcid": "4420", 00:21:23.565 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:23.565 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:23.565 "hdgst": false, 00:21:23.565 "ddgst": false 00:21:23.565 }, 00:21:23.565 "method": "bdev_nvme_attach_controller" 00:21:23.565 },{ 00:21:23.565 "params": { 00:21:23.565 "name": "Nvme6", 00:21:23.565 "trtype": "tcp", 00:21:23.565 "traddr": "10.0.0.2", 00:21:23.565 "adrfam": "ipv4", 00:21:23.565 "trsvcid": "4420", 00:21:23.565 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:23.565 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:23.565 "hdgst": false, 00:21:23.565 "ddgst": false 00:21:23.565 }, 00:21:23.565 "method": "bdev_nvme_attach_controller" 00:21:23.565 },{ 00:21:23.565 "params": { 00:21:23.565 "name": "Nvme7", 00:21:23.565 "trtype": "tcp", 00:21:23.565 "traddr": "10.0.0.2", 00:21:23.565 "adrfam": "ipv4", 00:21:23.565 "trsvcid": "4420", 00:21:23.565 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:23.565 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:23.565 "hdgst": false, 00:21:23.565 "ddgst": false 00:21:23.565 }, 00:21:23.565 "method": "bdev_nvme_attach_controller" 00:21:23.565 },{ 00:21:23.565 "params": { 00:21:23.565 "name": "Nvme8", 00:21:23.565 
"trtype": "tcp", 00:21:23.565 "traddr": "10.0.0.2", 00:21:23.565 "adrfam": "ipv4", 00:21:23.565 "trsvcid": "4420", 00:21:23.565 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:23.565 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:23.565 "hdgst": false, 00:21:23.565 "ddgst": false 00:21:23.565 }, 00:21:23.565 "method": "bdev_nvme_attach_controller" 00:21:23.565 },{ 00:21:23.565 "params": { 00:21:23.565 "name": "Nvme9", 00:21:23.565 "trtype": "tcp", 00:21:23.565 "traddr": "10.0.0.2", 00:21:23.565 "adrfam": "ipv4", 00:21:23.565 "trsvcid": "4420", 00:21:23.565 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:23.565 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:23.565 "hdgst": false, 00:21:23.565 "ddgst": false 00:21:23.565 }, 00:21:23.565 "method": "bdev_nvme_attach_controller" 00:21:23.565 },{ 00:21:23.565 "params": { 00:21:23.565 "name": "Nvme10", 00:21:23.565 "trtype": "tcp", 00:21:23.565 "traddr": "10.0.0.2", 00:21:23.565 "adrfam": "ipv4", 00:21:23.565 "trsvcid": "4420", 00:21:23.565 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:23.565 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:23.565 "hdgst": false, 00:21:23.565 "ddgst": false 00:21:23.565 }, 00:21:23.565 "method": "bdev_nvme_attach_controller" 00:21:23.565 }' 00:21:23.565 [2024-07-15 20:46:57.859847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.565 [2024-07-15 20:46:57.933234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.464 Running I/O for 10 seconds... 00:21:26.036 20:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:26.036 20:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:21:26.036 20:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:26.036 20:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.036 20:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:26.036 20:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.036 20:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:26.036 20:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:26.036 20:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:26.036 20:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:26.036 20:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:21:26.036 20:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:21:26.036 20:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:26.036 20:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:26.036 20:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:26.037 20:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:26.037 20:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.037 20:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:26.037 20:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.037 20:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=195 00:21:26.037 20:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:21:26.037 20:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:21:26.037 20:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:21:26.037 20:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:21:26.037 20:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2751858 00:21:26.037 20:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 2751858 ']' 00:21:26.037 20:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 2751858 00:21:26.037 20:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:21:26.037 20:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:26.037 20:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2751858 00:21:26.037 20:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:26.037 20:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:26.037 20:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2751858' 00:21:26.037 killing process with pid 2751858 00:21:26.037 20:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 2751858 00:21:26.037 20:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 2751858 00:21:26.037 [2024-07-15 20:47:00.485878] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.485922] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.485931] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.485938] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.485945] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.485957] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.485963] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.485970] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.485976] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.485982] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.485988] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.485995] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486001] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486007] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486014] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486020] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486026] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486034] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486041] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486048] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486054] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486060] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486066] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486073] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486079] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486086] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486092] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486098] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486104] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486110] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 
00:21:26.037 [2024-07-15 20:47:00.486116] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486123] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486131] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486137] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486154] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486160] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486166] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486173] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486179] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486186] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486192] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486199] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486206] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486212] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486218] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486230] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486236] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486243] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486249] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486256] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486262] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486269] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is 
same with the state(5) to be set 00:21:26.037 [2024-07-15 20:47:00.486275] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.038 [2024-07-15 20:47:00.486282] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.038 [2024-07-15 20:47:00.486288] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.038 [2024-07-15 20:47:00.486295] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.038 [2024-07-15 20:47:00.486301] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.038 [2024-07-15 20:47:00.486307] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.038 [2024-07-15 20:47:00.486314] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.038 [2024-07-15 20:47:00.486322] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.038 [2024-07-15 20:47:00.486329] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.038 [2024-07-15 20:47:00.486335] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.038 [2024-07-15 20:47:00.486341] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77ad0 is same with the state(5) to be set 00:21:26.038 [2024-07-15 20:47:00.488201] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77f70 is same with the state(5) to be set 00:21:26.038 [2024-07-15 20:47:00.488232] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77f70 is same with the state(5) to be set 00:21:26.038 [2024-07-15 20:47:00.488240] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77f70 is same with the state(5) to be set 00:21:26.038 [2024-07-15 20:47:00.488247] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77f70 is same with the state(5) to be set 00:21:26.038 [2024-07-15 20:47:00.488254] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77f70 is same with the state(5) to be set 00:21:26.038 [2024-07-15 20:47:00.488261] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77f70 is same with the state(5) to be set 00:21:26.038 [2024-07-15 20:47:00.488267] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77f70 is same with the state(5) to be set 00:21:26.038 [2024-07-15 20:47:00.488274] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77f70 is same with the state(5) to be set 00:21:26.038 [2024-07-15 20:47:00.488280] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77f70 is same with the state(5) to be set 00:21:26.038 [2024-07-15 20:47:00.488288] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77f70 is same with the state(5) to be set 00:21:26.038 [2024-07-15 20:47:00.488294] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xc77f70 is same with the state(5) to be set
00:21:26.038 [2024-07-15 20:47:00.488301] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77f70 is same with the state(5) to be set
[... the same tqpair=0xc77f70 recv-state error repeats with advancing timestamps through 2024-07-15 20:47:00.488635 ...]
00:21:26.039 [2024-07-15 20:47:00.489816] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc78410 is same with the state(5) to be set
[... the same tqpair=0xc78410 recv-state error repeats through 2024-07-15 20:47:00.490294, its lines interleaved mid-line with the initiator-side notices that follow ...]
00:21:26.039 [2024-07-15 20:47:00.489911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:26.039 [2024-07-15 20:47:00.489945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for cid:1, cid:2 and cid:3 ...]
00:21:26.039 [2024-07-15 20:47:00.490005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1dc70 is same with the state(5) to be set
[... the same group of four aborted ASYNC EVENT REQUESTs, each group closed by an nvme_tcp.c:327 recv-state error, repeats for tqpair=0xbd28b0 (20:47:00.490118), tqpair=0xbe98d0 (20:47:00.490241) and tqpair=0xbf2050 (20:47:00.490335) ...]
00:21:26.040 [2024-07-15 20:47:00.491266] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc788d0 is same with the state(5) to be set
[... the same tqpair=0xc788d0 recv-state error repeats through 2024-07-15 20:47:00.491682 ...]
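The flood of identical *ERROR* lines above comes from the target's receive-state setter refusing a no-op transition while a qpair is being torn down: every poll asks for the state the qpair is already in, and each attempt logs once. A minimal sketch of that guard follows; the struct name, the fprintf stand-in for SPDK's error-log macro, and the enum ordering that makes state(5) the terminal error state are simplified assumptions, not the literal lib/nvmf/tcp.c source.

/* Sketch of the no-op-transition guard behind the repeated *ERROR* lines.
 * Names below are illustrative stand-ins for the real SPDK definitions. */
#include <stdio.h>

enum nvme_tcp_pdu_recv_state {
    NVME_TCP_PDU_RECV_STATE_AWAIT_PDU_READY = 0,
    NVME_TCP_PDU_RECV_STATE_AWAIT_PDU_CH,
    NVME_TCP_PDU_RECV_STATE_AWAIT_PDU_PSH,
    NVME_TCP_PDU_RECV_STATE_AWAIT_PDU_PAYLOAD,
    NVME_TCP_PDU_RECV_STATE_QUIESCING,
    NVME_TCP_PDU_RECV_STATE_ERROR, /* assumed to be the state(5) seen above;
                                    * exact ordering depends on the SPDK revision */
};

struct tcp_qpair_sketch {
    enum nvme_tcp_pdu_recv_state recv_state;
};

static void
set_recv_state(struct tcp_qpair_sketch *tqpair, enum nvme_tcp_pdu_recv_state state)
{
    if (tqpair->recv_state == state) {
        /* The message the log repeats: the transition is a no-op. */
        fprintf(stderr,
                "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                (void *)tqpair, (int)state);
        return;
    }
    tqpair->recv_state = state;
}

int
main(void)
{
    struct tcp_qpair_sketch q = { .recv_state = NVME_TCP_PDU_RECV_STATE_ERROR };

    /* Two requests to enter the state the qpair already holds: each one
     * logs, which is how a tight teardown loop floods the output above. */
    set_recv_state(&q, NVME_TCP_PDU_RECV_STATE_ERROR);
    set_recv_state(&q, NVME_TCP_PDU_RECV_STATE_ERROR);
    return 0;
}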
00:21:26.041 [2024-07-15 20:47:00.491677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:26.041 [2024-07-15 20:47:00.491701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE / ABORTED - SQ DELETION pair repeats for cid:1 through cid:63, lba stepping by 128 from 24704 to 32640, through 2024-07-15 20:47:00.492737 ...]
00:21:26.042 [2024-07-15 20:47:00.492407] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc78c40 is same with the state(5) to be set
[... the same tqpair=0xc78c40 recv-state error repeats, interleaved mid-line with the WRITE aborts above, through 2024-07-15 20:47:00.492861 ...]
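The "(00/08)" in every completion above is NVMe status code type 0x0 (generic command status) with status code 0x08, which the printer renders as ABORTED - SQ DELETION: once the submission queue is deleted during the controller reset, every WRITE still queued on it fails with that status. A small, hypothetical decoder illustrating the mapping (not the actual spdk_nvme_print_completion() implementation):

/* Decode the SCT/SC pair printed as "(00/08)" in the completions above.
 * Stand-alone illustration of the NVMe status mapping, with invented names. */
#include <stdint.h>
#include <stdio.h>

static const char *
status_string(uint8_t sct, uint8_t sc)
{
    /* Status Code Type 0x0 = generic command status;
     * Status Code 0x08 = Command Aborted due to SQ Deletion (NVMe base spec). */
    if (sct == 0x0 && sc == 0x08) {
        return "ABORTED - SQ DELETION";
    }
    return "OTHER";
}

int
main(void)
{
    /* Every completion in the run above carries SCT/SC = 00/08. */
    printf("%s (%02x/%02x)\n", status_string(0x0, 0x08), 0x0, 0x08);
    return 0;
}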
set 00:21:26.043 [2024-07-15 20:47:00.492854] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc78c40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.492861] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc78c40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.493137] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb47ef0 was disconnected and freed. reset controller. 00:21:26.043 [2024-07-15 20:47:00.494764] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.494782] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.494789] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.494796] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.494802] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.494812] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.494818] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.494824] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.494831] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.494837] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.494843] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.494849] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.494856] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.494862] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.494868] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.494874] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.494881] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.494887] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.494892] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 
is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.494898] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.494904] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.494911] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.494916] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.494922] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.494928] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.494934] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.494940] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.494946] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.494953] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.494959] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.494966] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.494972] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.494980] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.494986] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.494992] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.494998] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.495004] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.495010] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.495016] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.495023] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.495029] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.495035] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.043 [2024-07-15 20:47:00.495040] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.044 [2024-07-15 20:47:00.495046] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.044 [2024-07-15 20:47:00.495052] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.044 [2024-07-15 20:47:00.495058] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.044 [2024-07-15 20:47:00.495064] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.044 [2024-07-15 20:47:00.495070] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.044 [2024-07-15 20:47:00.495076] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.044 [2024-07-15 20:47:00.495082] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.044 [2024-07-15 20:47:00.495088] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.044 [2024-07-15 20:47:00.495093] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.044 [2024-07-15 20:47:00.495099] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.044 [2024-07-15 20:47:00.495105] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.044 [2024-07-15 20:47:00.495111] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.044 [2024-07-15 20:47:00.495118] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.044 [2024-07-15 20:47:00.495123] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.044 [2024-07-15 20:47:00.495130] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.044 [2024-07-15 20:47:00.495136] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.044 [2024-07-15 20:47:00.495142] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.044 [2024-07-15 20:47:00.495150] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.044 [2024-07-15 20:47:00.495156] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.044 [2024-07-15 20:47:00.495163] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79a40 is same with the state(5) to be set 00:21:26.044 [2024-07-15 20:47:00.495219] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:21:26.044 [2024-07-15 20:47:00.495257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd28b0 (9): Bad file descriptor 00:21:26.044 [2024-07-15 20:47:00.495553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.044 [2024-07-15 20:47:00.495572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.044 [2024-07-15 20:47:00.495584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.044 [2024-07-15 20:47:00.495591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.044 [2024-07-15 20:47:00.495600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.044 [2024-07-15 20:47:00.495606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.044 [2024-07-15 20:47:00.495616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.044 [2024-07-15 20:47:00.495623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.044 [2024-07-15 20:47:00.495633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.044 [2024-07-15 20:47:00.495640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.044 [2024-07-15 20:47:00.495649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.044 [2024-07-15 20:47:00.495655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.044 [2024-07-15 20:47:00.495663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.044 [2024-07-15 20:47:00.495670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.044 [2024-07-15 20:47:00.495679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.044 [2024-07-15 20:47:00.495686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.044 [2024-07-15 20:47:00.495694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.044 [2024-07-15 20:47:00.495700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
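Note on the flood of tcp.c:1621 lines above: nvmf_tcp_qpair_set_recv_state() in SPDK's lib/nvmf/tcp.c declines to set the PDU receive state to the value the qpair already holds, logging once per call instead, so during qpair teardown the identical *ERROR* line repeats with only the timestamp changing (state(5) is presumably the teardown/error state in this build's enum ordering; that mapping is an inference from context, not confirmed by the log). The companion "(9): Bad file descriptor" is errno 9 (EBADF): the flush ran after the socket was already closed. Below is a self-contained sketch of the guard pattern with stand-in types and a plain fprintf in place of SPDK_ERRLOG; the real struct, enum, and logging live in the SPDK tree and may differ.

    /* Stand-in sketch of the guard that produces the repeated line; the real
     * implementation is nvmf_tcp_qpair_set_recv_state() in SPDK lib/nvmf/tcp.c.
     * Types, enum ordering, and main() are illustrative, not SPDK's. */
    #include <stdio.h>

    enum pdu_recv_state { AWAIT_PDU_READY, AWAIT_PDU_CH, AWAIT_PDU_PSH,
                          AWAIT_PDU_PAYLOAD, QUIESCING, RECV_ERROR };

    struct tcp_qpair { enum pdu_recv_state recv_state; };

    static void set_recv_state(struct tcp_qpair *tqpair, enum pdu_recv_state state)
    {
        if (tqpair->recv_state == state) {
            /* Same state requested again: log and return without side effects. */
            fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)tqpair, (int)state);
            return;
        }
        tqpair->recv_state = state;  /* per-state bookkeeping elided */
    }

    int main(void)
    {
        struct tcp_qpair q = { AWAIT_PDU_READY };
        set_recv_state(&q, RECV_ERROR);  /* real transition: no log line */
        set_recv_state(&q, RECV_ERROR);  /* re-entry: emits the *ERROR* line, state(5) here */
        return 0;
    }

Each burst of repeats in the log corresponds to the teardown path calling the setter repeatedly while the qpair is already in its final state, which is noisy but harmless.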
[2024-07-15 20:47:00.495708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.044
[2024-07-15 20:47:00.495714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.044
[2024-07-15 20:47:00.495726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.044
[2024-07-15 20:47:00.495733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.044
[2024-07-15 20:47:00.495730] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79ee0 is same with the state(5) to be set 00:21:26.044
[last *ERROR* line repeated ~62 more times for tqpair=0xc79ee0, 20:47:00.495744 through 20:47:00.496196; the repeats were interleaved mid-line with the *NOTICE* lines below and are elided here]
[2024-07-15 20:47:00.495742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.044
[2024-07-15 20:47:00.495749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.044
[2024-07-15 20:47:00.495757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.044
[2024-07-15 20:47:00.495766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.044
[2024-07-15 20:47:00.495780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.044
[2024-07-15 20:47:00.495788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.044
[2024-07-15 20:47:00.495797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.044
[2024-07-15 20:47:00.495804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.044
[2024-07-15 20:47:00.495815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.044
[2024-07-15 20:47:00.495823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.044
[2024-07-15 20:47:00.495832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.044
[2024-07-15 20:47:00.495841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.044
[2024-07-15 20:47:00.495851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.044
[2024-07-15 20:47:00.495858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.044
[2024-07-15 20:47:00.495867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.044
[2024-07-15 20:47:00.495876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.044
[2024-07-15 20:47:00.495887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.044
[2024-07-15 20:47:00.495896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.044
[2024-07-15 20:47:00.495907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.044
[2024-07-15 20:47:00.495917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.044
[2024-07-15 20:47:00.495926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.044
[2024-07-15 20:47:00.495934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.044
[2024-07-15 20:47:00.495943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.044
[2024-07-15 20:47:00.495952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.044
[2024-07-15 20:47:00.495963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.044
[2024-07-15 20:47:00.495971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.044
[2024-07-15 20:47:00.495981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.045
[2024-07-15 20:47:00.495991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.045
[2024-07-15 20:47:00.496002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.045
[2024-07-15 20:47:00.496009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.045
[2024-07-15 20:47:00.496018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.045
[2024-07-15 20:47:00.496025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.045
[2024-07-15 20:47:00.496034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.045
[2024-07-15 20:47:00.496042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.045
[2024-07-15 20:47:00.496051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.045
[2024-07-15 20:47:00.496062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.045
[2024-07-15 20:47:00.496076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.045
[2024-07-15 20:47:00.496083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.045
[2024-07-15 20:47:00.496093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.045
[2024-07-15 20:47:00.496100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.045
[2024-07-15 20:47:00.496109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.045
[2024-07-15 20:47:00.496117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.045
[2024-07-15 20:47:00.496127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.045
[2024-07-15 20:47:00.496134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.045
[2024-07-15 20:47:00.496143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.045
[2024-07-15 20:47:00.496150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.045
[2024-07-15 20:47:00.496163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.045
[2024-07-15 20:47:00.496170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.045
[2024-07-15 20:47:00.496180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.045
[2024-07-15 20:47:00.496187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.045
[2024-07-15 20:47:00.496198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.045
[2024-07-15 20:47:00.496205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.045
[2024-07-15 20:47:00.496213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.045
[2024-07-15 20:47:00.496219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.045
[2024-07-15 20:47:00.496232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.045
[2024-07-15 20:47:00.496239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.045
[2024-07-15 20:47:00.496247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.045
[2024-07-15 20:47:00.496254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:21:26.045 [2024-07-15 20:47:00.496262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.045 [2024-07-15 20:47:00.496268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.045 [2024-07-15 20:47:00.496276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.045 [2024-07-15 20:47:00.496282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.045 [2024-07-15 20:47:00.496291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.045 [2024-07-15 20:47:00.496297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.045 [2024-07-15 20:47:00.496306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.045 [2024-07-15 20:47:00.496312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.045 [2024-07-15 20:47:00.496321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.045 [2024-07-15 20:47:00.496328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.045 [2024-07-15 20:47:00.496337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.045 [2024-07-15 20:47:00.496344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.045 [2024-07-15 20:47:00.496352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.045 [2024-07-15 20:47:00.496359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.045 [2024-07-15 20:47:00.496367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.045 [2024-07-15 20:47:00.496399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.045 [2024-07-15 20:47:00.496451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.045 [2024-07-15 20:47:00.496500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.045 [2024-07-15 20:47:00.496552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.045 [2024-07-15 20:47:00.496601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:26.045 [2024-07-15 20:47:00.496653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.045 [2024-07-15 20:47:00.496707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.045 [2024-07-15 20:47:00.496760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.045 [2024-07-15 20:47:00.496809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.045 [2024-07-15 20:47:00.496862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.045 [2024-07-15 20:47:00.496912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.045 [2024-07-15 20:47:00.496964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.497014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.497067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.497118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.497171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.497229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.497293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.497344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.497402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.497453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.497506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.497559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.497611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.497660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 
20:47:00.497712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.497760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.497814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.497862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.497915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.497963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.498021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.498070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.498176] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb5f910 was disconnected and freed. reset controller. 00:21:26.046 [2024-07-15 20:47:00.498241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.498250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.498286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.498334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.498386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.498440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.498500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.498549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.498602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.498650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.498703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.498753] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.498804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.498853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.498910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.498958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.499011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.499060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.499116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.512854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.512881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.512891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.512903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.512913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.512925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.512936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.512947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.512957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.512969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.512979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.512991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.513001] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.513013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.513023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.513039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.513050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.513061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.513071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.513083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.513093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.513104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.513114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.513126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.513136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.513148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.513158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.513169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.513179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.513191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.513200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.513212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.513221] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.513246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.513256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.513268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.513278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.513291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.513302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.513314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.513326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.513338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.513349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.513361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.513371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.513382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.513392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.513403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.513413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.513425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.513435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.046 [2024-07-15 20:47:00.513447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.046 [2024-07-15 20:47:00.513456] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:26.046 [2024-07-15 20:47:00.513468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:26.046 [2024-07-15 20:47:00.513477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:26.046 - 00:21:26.313 [2024-07-15 20:47:00.513489 - 20:47:00.514067] nvme_qpair.c: 243/474: *NOTICE*: READ sqid:1 cid:0-26 nsid:1 lba:24576-27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:26.313 [2024-07-15 20:47:00.514144] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb60de0 was disconnected and freed. reset controller.
00:21:26.313 [2024-07-15 20:47:00.514378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa1dc70 (9): Bad file descriptor
00:21:26.313 [2024-07-15 20:47:00.514428 - 20:47:00.514510] nvme_qpair.c: 223/474: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:26.313 [2024-07-15 20:47:00.514520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa64bf0 is same with the state(5) to be set
00:21:26.313 [2024-07-15 20:47:00.514556 - 20:47:00.514631] nvme_qpair.c: 223/474: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:26.313 [2024-07-15 20:47:00.514640] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5a1d0 is same with the state(5) to be set
00:21:26.313 - 00:21:26.314 [2024-07-15 20:47:00.514671 - 20:47:00.514747] nvme_qpair.c: 223/474: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:26.314 [2024-07-15 20:47:00.514757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa61b30 is same with the state(5) to be set
00:21:26.314 [2024-07-15 20:47:00.514784 - 20:47:00.514859] nvme_qpair.c: 223/474: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:26.314 [2024-07-15 20:47:00.514869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa40190 is same with the state(5) to be set
00:21:26.314 [2024-07-15 20:47:00.514897 - 20:47:00.514971] nvme_qpair.c: 223/474: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:26.314 [2024-07-15 20:47:00.514981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56c340 is same with the state(5) to be set
00:21:26.314 [2024-07-15 20:47:00.515000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe98d0 (9): Bad file descriptor
00:21:26.314 [2024-07-15 20:47:00.515022] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf2050 (9): Bad file descriptor
00:21:26.314 [2024-07-15 20:47:00.515056 - 20:47:00.515130] nvme_qpair.c: 223/474: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:26.314 [2024-07-15 20:47:00.515142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe90d0 is same with the state(5) to be set
00:21:26.314 [2024-07-15 20:47:00.518608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:21:26.314 [2024-07-15 20:47:00.518647] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:21:26.314 [2024-07-15 20:47:00.518664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa40190 (9): Bad file descriptor
00:21:26.314 [2024-07-15 20:47:00.518679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa61b30 (9): Bad file descriptor
00:21:26.314 [2024-07-15 20:47:00.518965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:26.314 [2024-07-15 20:47:00.518987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd28b0 with addr=10.0.0.2, port=4420
00:21:26.314 [2024-07-15 20:47:00.518997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd28b0 is same with the state(5) to be set
00:21:26.314 [2024-07-15 20:47:00.519068] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:26.314 [2024-07-15 20:47:00.519126] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:26.314 [2024-07-15 20:47:00.519529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd28b0 (9): Bad file descriptor
00:21:26.314 [2024-07-15 20:47:00.519621] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:26.314 [2024-07-15 20:47:00.519678] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:26.314 [2024-07-15 20:47:00.520711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:26.314 [2024-07-15 20:47:00.520734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa61b30 with addr=10.0.0.2, port=4420
00:21:26.314 [2024-07-15 20:47:00.520746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa61b30 is same with the state(5) to be set
00:21:26.314 [2024-07-15 20:47:00.520963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:26.314 [2024-07-15 20:47:00.520979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa40190 with addr=10.0.0.2, port=4420
00:21:26.314 [2024-07-15 20:47:00.520989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa40190 is same with the state(5) to be set
00:21:26.314 [2024-07-15 20:47:00.520998] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:21:26.314 [2024-07-15 20:47:00.521009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:21:26.314 [2024-07-15 20:47:00.521020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:21:26.314 - 00:21:26.317 [2024-07-15 20:47:00.521102 - 20:47:00.522556] nvme_qpair.c: 243/474: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:26.317 [2024-07-15 20:47:00.522566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa19040 is same with the state(5) to be set
00:21:26.317 [2024-07-15 20:47:00.522627] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa19040 was disconnected and freed. reset controller.
00:21:26.317 [2024-07-15 20:47:00.522711] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:26.317 [2024-07-15 20:47:00.522746] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:26.317 [2024-07-15 20:47:00.522761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:26.317 [2024-07-15 20:47:00.522778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa61b30 (9): Bad file descriptor
00:21:26.317 [2024-07-15 20:47:00.522794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa40190 (9): Bad file descriptor
00:21:26.317 [2024-07-15 20:47:00.524056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:21:26.317 [2024-07-15 20:47:00.524080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x56c340 (9): Bad file descriptor
00:21:26.317 [2024-07-15 20:47:00.524091] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:21:26.317 [2024-07-15 20:47:00.524100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:21:26.317 [2024-07-15 20:47:00.524109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:21:26.317 [2024-07-15 20:47:00.524122] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:21:26.317 [2024-07-15 20:47:00.524131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:21:26.317 [2024-07-15 20:47:00.524139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:21:26.317 [2024-07-15 20:47:00.524190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:26.317 [2024-07-15 20:47:00.524200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:26.317 [2024-07-15 20:47:00.524768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:26.317 [2024-07-15 20:47:00.524787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x56c340 with addr=10.0.0.2, port=4420
00:21:26.317 [2024-07-15 20:47:00.524797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56c340 is same with the state(5) to be set
00:21:26.317 [2024-07-15 20:47:00.524816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa64bf0 (9): Bad file descriptor
00:21:26.317 [2024-07-15 20:47:00.524837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5a1d0 (9): Bad file descriptor
00:21:26.317 [2024-07-15 20:47:00.524870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe90d0 (9): Bad file descriptor
00:21:26.317 [2024-07-15 20:47:00.524953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x56c340 (9): Bad file descriptor
00:21:26.317 - 00:21:26.320 [2024-07-15 20:47:00.525005 - 20:47:00.526080] nvme_qpair.c: 243/474: *NOTICE*: READ sqid:1 cid:0-54 nsid:1 lba:24576-31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:26.320 [2024-07-15 20:47:00.526093] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.320 [2024-07-15 20:47:00.526102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.320 [2024-07-15 20:47:00.526113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.320 [2024-07-15 20:47:00.526120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.320 [2024-07-15 20:47:00.526131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.320 [2024-07-15 20:47:00.526139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.320 [2024-07-15 20:47:00.526149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.320 [2024-07-15 20:47:00.526156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.320 [2024-07-15 20:47:00.526167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.320 [2024-07-15 20:47:00.526176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.320 [2024-07-15 20:47:00.526186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.320 [2024-07-15 20:47:00.526195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.320 [2024-07-15 20:47:00.526204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.320 [2024-07-15 20:47:00.526214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.320 [2024-07-15 20:47:00.526227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.320 [2024-07-15 20:47:00.526237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.320 [2024-07-15 20:47:00.526247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.320 [2024-07-15 20:47:00.526255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.320 [2024-07-15 20:47:00.526264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5eea0 is same with the state(5) to be set 00:21:26.320 [2024-07-15 20:47:00.527542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.320 [2024-07-15 20:47:00.527558] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.320 [2024-07-15 20:47:00.527570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.320 [2024-07-15 20:47:00.527580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.320 [2024-07-15 20:47:00.527590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.320 [2024-07-15 20:47:00.527598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.320 [2024-07-15 20:47:00.527608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.320 [2024-07-15 20:47:00.527619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.320 [2024-07-15 20:47:00.527629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.320 [2024-07-15 20:47:00.527638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.320 [2024-07-15 20:47:00.527649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.320 [2024-07-15 20:47:00.527657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.320 [2024-07-15 20:47:00.527667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.320 [2024-07-15 20:47:00.527675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.320 [2024-07-15 20:47:00.527686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.320 [2024-07-15 20:47:00.527694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.320 [2024-07-15 20:47:00.527703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.321 [2024-07-15 20:47:00.527712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.321 [2024-07-15 20:47:00.527722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.321 [2024-07-15 20:47:00.527732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.321 [2024-07-15 20:47:00.527742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.321 [2024-07-15 20:47:00.527750] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.321 [2024-07-15 20:47:00.527761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.321 [2024-07-15 20:47:00.527768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.321 [2024-07-15 20:47:00.527778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.321 [2024-07-15 20:47:00.527786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.321 [2024-07-15 20:47:00.527796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.321 [2024-07-15 20:47:00.527804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.321 [2024-07-15 20:47:00.527814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.321 [2024-07-15 20:47:00.527822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.321 [2024-07-15 20:47:00.527831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.321 [2024-07-15 20:47:00.527840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.321 [2024-07-15 20:47:00.527854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.321 [2024-07-15 20:47:00.527862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.321 [2024-07-15 20:47:00.527872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.321 [2024-07-15 20:47:00.527879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.321 [2024-07-15 20:47:00.527890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.321 [2024-07-15 20:47:00.527898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.321 [2024-07-15 20:47:00.527909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.321 [2024-07-15 20:47:00.527917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.321 [2024-07-15 20:47:00.527927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.321 [2024-07-15 20:47:00.527936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.321 [2024-07-15 20:47:00.527947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.321 [2024-07-15 20:47:00.527955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.321 [2024-07-15 20:47:00.527968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.321 [2024-07-15 20:47:00.527976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.321 [2024-07-15 20:47:00.527987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.321 [2024-07-15 20:47:00.527995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.321 [2024-07-15 20:47:00.528005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.321 [2024-07-15 20:47:00.528014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.321 [2024-07-15 20:47:00.528024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.321 [2024-07-15 20:47:00.528033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.321 [2024-07-15 20:47:00.528043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.321 [2024-07-15 20:47:00.528052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.321 [2024-07-15 20:47:00.528062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.321 [2024-07-15 20:47:00.528070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.321 [2024-07-15 20:47:00.528080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.321 [2024-07-15 20:47:00.528090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.321 [2024-07-15 20:47:00.528101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.321 [2024-07-15 20:47:00.528109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.321 [2024-07-15 20:47:00.528119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.321 [2024-07-15 20:47:00.528128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.321 [2024-07-15 20:47:00.528138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.321 [2024-07-15 20:47:00.528147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.321 [2024-07-15 20:47:00.528157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.321 [2024-07-15 20:47:00.528165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.321 [2024-07-15 20:47:00.528175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.321 [2024-07-15 20:47:00.528184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.321 [2024-07-15 20:47:00.528193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.321 [2024-07-15 20:47:00.528202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.321 [2024-07-15 20:47:00.528212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.321 [2024-07-15 20:47:00.528221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.321 [2024-07-15 20:47:00.528234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.321 [2024-07-15 20:47:00.528243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.321 [2024-07-15 20:47:00.528255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.321 [2024-07-15 20:47:00.528263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.322 [2024-07-15 20:47:00.528274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.322 [2024-07-15 20:47:00.528282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.322 [2024-07-15 20:47:00.528292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.322 [2024-07-15 20:47:00.528301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.322 [2024-07-15 20:47:00.528311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.322 [2024-07-15 20:47:00.528319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:26.322 [2024-07-15 20:47:00.528331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.322 [2024-07-15 20:47:00.528340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.322 [2024-07-15 20:47:00.528350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.322 [2024-07-15 20:47:00.528359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.322 [2024-07-15 20:47:00.528368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.322 [2024-07-15 20:47:00.528378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.322 [2024-07-15 20:47:00.528388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.322 [2024-07-15 20:47:00.528398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.322 [2024-07-15 20:47:00.528409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.322 [2024-07-15 20:47:00.528417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.322 [2024-07-15 20:47:00.528427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.322 [2024-07-15 20:47:00.528435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.322 [2024-07-15 20:47:00.528445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.322 [2024-07-15 20:47:00.528452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.322 [2024-07-15 20:47:00.528463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.322 [2024-07-15 20:47:00.528471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.322 [2024-07-15 20:47:00.528481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.322 [2024-07-15 20:47:00.528490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.322 [2024-07-15 20:47:00.528500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.322 [2024-07-15 20:47:00.528508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:26.322 [2024-07-15 20:47:00.528518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.322 [2024-07-15 20:47:00.528528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.322 [2024-07-15 20:47:00.528539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.322 [2024-07-15 20:47:00.528547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.322 [2024-07-15 20:47:00.528559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.322 [2024-07-15 20:47:00.528572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.322 [2024-07-15 20:47:00.528583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.322 [2024-07-15 20:47:00.528592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.322 [2024-07-15 20:47:00.528602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.322 [2024-07-15 20:47:00.528610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.322 [2024-07-15 20:47:00.528621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.322 [2024-07-15 20:47:00.528630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.322 [2024-07-15 20:47:00.528640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.322 [2024-07-15 20:47:00.528648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.322 [2024-07-15 20:47:00.528659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.322 [2024-07-15 20:47:00.528667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.322 [2024-07-15 20:47:00.528678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.322 [2024-07-15 20:47:00.528687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.322 [2024-07-15 20:47:00.528697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.322 [2024-07-15 20:47:00.528706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.322 [2024-07-15 
20:47:00.528717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.322 [2024-07-15 20:47:00.528726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.322 [2024-07-15 20:47:00.528736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.322 [2024-07-15 20:47:00.528744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.322 [2024-07-15 20:47:00.528756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.322 [2024-07-15 20:47:00.528765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.322 [2024-07-15 20:47:00.528774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5490 is same with the state(5) to be set 00:21:26.322 [2024-07-15 20:47:00.530042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.322 [2024-07-15 20:47:00.530057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.322 [2024-07-15 20:47:00.530070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.322 [2024-07-15 20:47:00.530082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.323 [2024-07-15 20:47:00.530094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.323 [2024-07-15 20:47:00.530102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.323 [2024-07-15 20:47:00.530113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.323 [2024-07-15 20:47:00.530123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.323 [2024-07-15 20:47:00.530133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.323 [2024-07-15 20:47:00.530142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.323 [2024-07-15 20:47:00.530153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.323 [2024-07-15 20:47:00.530162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.323 [2024-07-15 20:47:00.530172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.323 [2024-07-15 20:47:00.530181] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.323 [2024-07-15 20:47:00.530192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.323 [2024-07-15 20:47:00.530201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.323 [2024-07-15 20:47:00.530212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.323 [2024-07-15 20:47:00.530220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.323 [2024-07-15 20:47:00.530236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.323 [2024-07-15 20:47:00.530246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.323 [2024-07-15 20:47:00.530257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.323 [2024-07-15 20:47:00.530266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.323 [2024-07-15 20:47:00.530276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.323 [2024-07-15 20:47:00.530285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.323 [2024-07-15 20:47:00.530296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.323 [2024-07-15 20:47:00.530305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.323 [2024-07-15 20:47:00.530315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.323 [2024-07-15 20:47:00.530324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.323 [2024-07-15 20:47:00.530336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.323 [2024-07-15 20:47:00.530345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.323 [2024-07-15 20:47:00.530356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.323 [2024-07-15 20:47:00.530365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.323 [2024-07-15 20:47:00.530376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.323 [2024-07-15 20:47:00.530384] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.323 [2024-07-15 20:47:00.530395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.323 [2024-07-15 20:47:00.530404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.323 [2024-07-15 20:47:00.530414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.323 [2024-07-15 20:47:00.530422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.323 [2024-07-15 20:47:00.530434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.323 [2024-07-15 20:47:00.530443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.323 [2024-07-15 20:47:00.530453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.323 [2024-07-15 20:47:00.530462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.323 [2024-07-15 20:47:00.530472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.323 [2024-07-15 20:47:00.530482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.323 [2024-07-15 20:47:00.530492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.323 [2024-07-15 20:47:00.530500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.323 [2024-07-15 20:47:00.530510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.323 [2024-07-15 20:47:00.530519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.323 [2024-07-15 20:47:00.530530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.323 [2024-07-15 20:47:00.530538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.323 [2024-07-15 20:47:00.530549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.323 [2024-07-15 20:47:00.530558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.323 [2024-07-15 20:47:00.530570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.323 [2024-07-15 20:47:00.530580] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.323 [2024-07-15 20:47:00.530591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.323 [2024-07-15 20:47:00.530599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.323 [2024-07-15 20:47:00.530610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.323 [2024-07-15 20:47:00.530618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.323 [2024-07-15 20:47:00.530629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.323 [2024-07-15 20:47:00.530638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.323 [2024-07-15 20:47:00.530648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.323 [2024-07-15 20:47:00.530658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.323 [2024-07-15 20:47:00.530668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.323 [2024-07-15 20:47:00.530678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.324 [2024-07-15 20:47:00.530688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.324 [2024-07-15 20:47:00.530696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.324 [2024-07-15 20:47:00.530706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.324 [2024-07-15 20:47:00.530715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.324 [2024-07-15 20:47:00.530725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.324 [2024-07-15 20:47:00.530734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.324 [2024-07-15 20:47:00.530744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.324 [2024-07-15 20:47:00.530753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.324 [2024-07-15 20:47:00.530765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.324 [2024-07-15 20:47:00.530773] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.324 [2024-07-15 20:47:00.530783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.324 [2024-07-15 20:47:00.530791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.324 [2024-07-15 20:47:00.530802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.324 [2024-07-15 20:47:00.530810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.324 [2024-07-15 20:47:00.530823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.324 [2024-07-15 20:47:00.530832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.324 [2024-07-15 20:47:00.530841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.324 [2024-07-15 20:47:00.530850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.324 [2024-07-15 20:47:00.530860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.324 [2024-07-15 20:47:00.530869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.324 [2024-07-15 20:47:00.530878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.324 [2024-07-15 20:47:00.530887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.324 [2024-07-15 20:47:00.530898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.324 [2024-07-15 20:47:00.530913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.324 [2024-07-15 20:47:00.530923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.324 [2024-07-15 20:47:00.530932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.324 [2024-07-15 20:47:00.530943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.324 [2024-07-15 20:47:00.530951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.324 [2024-07-15 20:47:00.530962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.324 [2024-07-15 20:47:00.530970] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.324 [2024-07-15 20:47:00.530981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.324 [2024-07-15 20:47:00.530990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.324 [2024-07-15 20:47:00.531000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.324 [2024-07-15 20:47:00.531009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.324 [2024-07-15 20:47:00.531019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.324 [2024-07-15 20:47:00.531028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.324 [2024-07-15 20:47:00.531038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.324 [2024-07-15 20:47:00.531046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.324 [2024-07-15 20:47:00.531057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.324 [2024-07-15 20:47:00.531067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.324 [2024-07-15 20:47:00.531077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.324 [2024-07-15 20:47:00.531086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.324 [2024-07-15 20:47:00.531097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.324 [2024-07-15 20:47:00.531105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.324 [2024-07-15 20:47:00.531115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.324 [2024-07-15 20:47:00.531124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.324 [2024-07-15 20:47:00.531134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.324 [2024-07-15 20:47:00.531143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.324 [2024-07-15 20:47:00.531153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.324 [2024-07-15 20:47:00.531162] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.324 [2024-07-15 20:47:00.531173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.324 [2024-07-15 20:47:00.531180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.324 [2024-07-15 20:47:00.531191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.324 [2024-07-15 20:47:00.531202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.324 [2024-07-15 20:47:00.531211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae6920 is same with the state(5) to be set 00:21:26.324 [2024-07-15 20:47:00.532454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:21:26.324 [2024-07-15 20:47:00.532475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:26.324 [2024-07-15 20:47:00.532486] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:21:26.324 [2024-07-15 20:47:00.532497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:21:26.324 [2024-07-15 20:47:00.532534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:21:26.324 [2024-07-15 20:47:00.532545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:21:26.325 [2024-07-15 20:47:00.532554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:21:26.325 [2024-07-15 20:47:00.532636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
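The "(00/08)" pair printed in every aborted completion above is the NVMe status code type (SCT) and status code (SC). A minimal standalone sketch of how that pair decodes (not SPDK source, though the bitfield layout mirrors the NVMe completion status field that SPDK's struct spdk_nvme_status models):

```c
/* Decode the "(SCT/SC)" pair that spdk_nvme_print_completion renders,
 * e.g. "ABORTED - SQ DELETION (00/08)". Per the NVMe spec, SCT 0x0 is
 * Generic Command Status and, within it, SC 0x08 is "Command Aborted
 * due to SQ Deletion" -- what a target returns for I/O still queued on
 * a submission queue it deletes during a controller reset. */
#include <stdint.h>
#include <stdio.h>

/* Completion queue entry dword 3, bits 31:16, per the NVMe spec. */
struct nvme_status {
	uint16_t p   : 1; /* phase tag */
	uint16_t sc  : 8; /* status code */
	uint16_t sct : 3; /* status code type */
	uint16_t crd : 2; /* command retry delay */
	uint16_t m   : 1; /* more */
	uint16_t dnr : 1; /* do not retry */
};

int main(void)
{
	struct nvme_status st = { .sct = 0x0, .sc = 0x08 }; /* the "(00/08)" in the log */

	if (st.sct == 0x0 && st.sc == 0x08) {
		printf("SCT 0x%x / SC 0x%02x: command aborted due to SQ deletion (dnr=%u)\n",
		       (unsigned)st.sct, (unsigned)st.sc, (unsigned)st.dnr);
	}
	return 0;
}
```

With dnr:0 as in the log, the status is retryable, which is why the initiator keeps resetting and resubmitting rather than failing the I/O permanently.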
00:21:26.325 [2024-07-15 20:47:00.532950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:26.325 [2024-07-15 20:47:00.532964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd28b0 with addr=10.0.0.2, port=4420
00:21:26.325 [2024-07-15 20:47:00.532972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd28b0 is same with the state(5) to be set
00:21:26.325 [2024-07-15 20:47:00.533208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:26.325 [2024-07-15 20:47:00.533221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1dc70 with addr=10.0.0.2, port=4420
00:21:26.325 [2024-07-15 20:47:00.533233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1dc70 is same with the state(5) to be set
00:21:26.325 [2024-07-15 20:47:00.533418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:26.325 [2024-07-15 20:47:00.533430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe98d0 with addr=10.0.0.2, port=4420
00:21:26.325 [2024-07-15 20:47:00.533437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe98d0 is same with the state(5) to be set
00:21:26.325 [2024-07-15 20:47:00.533624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:26.325 [2024-07-15 20:47:00.533635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf2050 with addr=10.0.0.2, port=4420
00:21:26.325 [2024-07-15 20:47:00.533643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf2050 is same with the state(5) to be set
00:21:26.325 [2024-07-15 20:47:00.534321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:21:26.325 [2024-07-15 20:47:00.534335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:21:26.325 [2024-07-15 20:47:00.534358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd28b0 (9): Bad file descriptor
00:21:26.325 [2024-07-15 20:47:00.534369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa1dc70 (9): Bad file descriptor
00:21:26.325 [2024-07-15 20:47:00.534377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe98d0 (9): Bad file descriptor
00:21:26.325 [2024-07-15 20:47:00.534386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf2050 (9): Bad file descriptor
00:21:26.325 [2024-07-15 20:47:00.534669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:26.325 [2024-07-15 20:47:00.534683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa40190 with addr=10.0.0.2, port=4420
00:21:26.325 [2024-07-15 20:47:00.534691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa40190 is same with the state(5) to be set
00:21:26.325 [2024-07-15 20:47:00.534899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:26.325 [2024-07-15 20:47:00.534911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa61b30 with addr=10.0.0.2, port=4420
00:21:26.325 [2024-07-15 20:47:00.534918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa61b30 is same with the state(5) to be set
00:21:26.325 [2024-07-15 20:47:00.534925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:21:26.325 [2024-07-15 20:47:00.534932] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:21:26.325 [2024-07-15 20:47:00.534940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:21:26.325 [2024-07-15 20:47:00.534950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:26.325 [2024-07-15 20:47:00.534958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:26.325 [2024-07-15 20:47:00.534964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:26.325 [2024-07-15 20:47:00.534974] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:21:26.325 [2024-07-15 20:47:00.534980] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:21:26.325 [2024-07-15 20:47:00.534988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:21:26.325 [2024-07-15 20:47:00.535000] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:21:26.325 [2024-07-15 20:47:00.535007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:21:26.325 [2024-07-15 20:47:00.535014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:21:26.325 [2024-07-15 20:47:00.535054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:21:26.325 [2024-07-15 20:47:00.535065-00:47:00.535084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. (logged 4 times, once per failed controller)
00:21:26.325 [2024-07-15 20:47:00.535098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa40190 (9): Bad file descriptor
00:21:26.325 [2024-07-15 20:47:00.535107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa61b30 (9): Bad file descriptor
00:21:26.325 [2024-07-15 20:47:00.535413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:26.325 [2024-07-15 20:47:00.535428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x56c340 with addr=10.0.0.2, port=4420
00:21:26.325 [2024-07-15 20:47:00.535435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56c340 is same with the state(5) to be set
00:21:26.325 [2024-07-15 20:47:00.535442] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:21:26.325 [2024-07-15 20:47:00.535450] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:21:26.325 [2024-07-15 20:47:00.535457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:21:26.325 [2024-07-15 20:47:00.535467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:21:26.325 [2024-07-15 20:47:00.535475] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:21:26.325 [2024-07-15 20:47:00.535482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
[repetitive log span condensed: 00:21:26.325-00:21:26.328, 2024-07-15 20:47:00.535527-00.536560 — 64 nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs on qid:1, each completion "ABORTED - SQ DELETION (00/08)": READ cid:6-17 (lba 25344-26752, len:128), WRITE cid:0-5 (lba 32768-33408, len:128), READ cid:18-63 (lba 26880-32640, len:128)]
00:21:26.328 [2024-07-15 20:47:00.536567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa17b70 is same with the state(5) to be set
[repetitive log span condensed: 00:21:26.328-00:21:26.330, 2024-07-15 20:47:00.537629-00.538669 — 64 READ command/completion *NOTICE* pairs on qid:1, cid:0-63 (lba 24576-32640, len:128), each completion "ABORTED - SQ DELETION (00/08)"]
00:21:26.330 [2024-07-15 20:47:00.538676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb622b0 is same with the state(5) to be set
[repetitive log span condensed: 00:21:26.330-00:21:26.332, 2024-07-15 20:47:00.539706-00.540446 — 47 READ command/completion *NOTICE* pairs on qid:1, cid:0-46 (lba 16384-22272, len:128), each completion "ABORTED - SQ DELETION (00/08)"]
00:21:26.332 [2024-07-15 20:47:00.540454] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.332 [2024-07-15 20:47:00.540461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.332 [2024-07-15 20:47:00.540469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.332 [2024-07-15 20:47:00.540476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.332 [2024-07-15 20:47:00.540484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.332 [2024-07-15 20:47:00.540492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.332 [2024-07-15 20:47:00.540500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.332 [2024-07-15 20:47:00.540506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.332 [2024-07-15 20:47:00.540515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.332 [2024-07-15 20:47:00.540523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.332 [2024-07-15 20:47:00.540532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.332 [2024-07-15 20:47:00.540539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.332 [2024-07-15 20:47:00.540548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.332 [2024-07-15 20:47:00.540555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.332 [2024-07-15 20:47:00.540564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.332 [2024-07-15 20:47:00.540571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.332 [2024-07-15 20:47:00.540579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.332 [2024-07-15 20:47:00.540586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.332 [2024-07-15 20:47:00.540594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.332 [2024-07-15 20:47:00.540601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.332 [2024-07-15 20:47:00.540609] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.332 [2024-07-15 20:47:00.540617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.332 [2024-07-15 20:47:00.540625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.332 [2024-07-15 20:47:00.540632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.332 [2024-07-15 20:47:00.540641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.332 [2024-07-15 20:47:00.540648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.332 [2024-07-15 20:47:00.540657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.332 [2024-07-15 20:47:00.540664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.332 [2024-07-15 20:47:00.540672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.332 [2024-07-15 20:47:00.540681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.332 [2024-07-15 20:47:00.540689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.332 [2024-07-15 20:47:00.540697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.332 [2024-07-15 20:47:00.540705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.332 [2024-07-15 20:47:00.540712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.332 [2024-07-15 20:47:00.540721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46a60 is same with the state(5) to be set 00:21:26.332 [2024-07-15 20:47:00.542011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:26.332 [2024-07-15 20:47:00.542024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
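Every queued READ in that run completes with the same status once its submission queue is torn down: (00/08) decodes as status code type 0x0 (generic command status) and status code 0x08, ABORTED - SQ DELETION, i.e. the command was aborted because its SQ was deleted mid-reset. A minimal sketch for sizing such an abort storm offline, assuming the console output has been saved to a file named build.log (the file name is illustrative):

    # Count completions aborted by SQ deletion in a saved console log.
    grep -c 'ABORTED - SQ DELETION (00/08)' build.log
    # Break the aborted READs down per submission queue id.
    grep -o 'READ sqid:[0-9]*' build.log | sort | uniq -c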
00:21:26.332 [2024-07-15 20:47:00.542033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:21:26.332 [2024-07-15 20:47:00.542042] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:21:26.332 task offset: 24576 on job bdev=Nvme10n1 fails
00:21:26.332
00:21:26.332 Latency(us)
00:21:26.332 Device Information : runtime(s)    IOPS   MiB/s  Fail/s  TO/s    Average       min       max
00:21:26.332 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.332 Job: Nvme1n1 ended in about 0.91 seconds with error
00:21:26.332 Verification LBA range: start 0x0 length 0x400
00:21:26.332 Nvme1n1  : 0.91    209.91  13.12  69.97   0.00  226425.32  17210.32  218833.25
00:21:26.332 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.332 Job: Nvme2n1 ended in about 0.92 seconds with error
00:21:26.332 Verification LBA range: start 0x0 length 0x400
00:21:26.332 Nvme2n1  : 0.92    209.34  13.08  69.78   0.00  223060.59  18464.06  221568.67
00:21:26.332 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.332 Job: Nvme3n1 ended in about 0.92 seconds with error
00:21:26.332 Verification LBA range: start 0x0 length 0x400
00:21:26.332 Nvme3n1  : 0.92    214.23  13.39  64.16   0.00  219479.26  15044.79  217009.64
00:21:26.332 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.332 Job: Nvme4n1 ended in about 0.92 seconds with error
00:21:26.332 Verification LBA range: start 0x0 length 0x400
00:21:26.332 Nvme4n1  : 0.92    214.08  13.38  69.20   0.00  212130.87  10200.82  216097.84
00:21:26.332 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.332 Job: Nvme5n1 ended in about 0.91 seconds with error
00:21:26.332 Verification LBA range: start 0x0 length 0x400
00:21:26.332 Nvme5n1  : 0.91    210.70  13.17  70.23   0.00  209741.69  16868.40  205156.17
00:21:26.332 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.332 Job: Nvme6n1 ended in about 0.90 seconds with error
00:21:26.332 Verification LBA range: start 0x0 length 0x400
00:21:26.332 Nvme6n1  : 0.90    212.40  13.28  70.80   0.00  203973.23  19033.93  220656.86
00:21:26.333 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.333 Job: Nvme7n1 ended in about 0.91 seconds with error
00:21:26.333 Verification LBA range: start 0x0 length 0x400
00:21:26.333 Nvme7n1  : 0.91    212.09  13.26  70.70   0.00  200309.09  17096.35  220656.86
00:21:26.333 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.333 Job: Nvme8n1 ended in about 0.93 seconds with error
00:21:26.333 Verification LBA range: start 0x0 length 0x400
00:21:26.333 Nvme8n1  : 0.93    207.12  12.95  69.04   0.00  201797.90  15158.76  194214.51
00:21:26.333 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.333 Job: Nvme9n1 ended in about 0.93 seconds with error
00:21:26.333 Verification LBA range: start 0x0 length 0x400
00:21:26.333 Nvme9n1  : 0.93    137.78   8.61  68.89   0.00  264606.35  18805.98  257129.07
00:21:26.333 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.333 Job: Nvme10n1 ended in about 0.88 seconds with error
00:21:26.333 Verification LBA range: start 0x0 length 0x400
00:21:26.333 Nvme10n1 : 0.88    217.52  13.60  72.51   0.00  182829.75   4616.01  244363.80
00:21:26.333 ===================================================================================================================
00:21:26.333 Total    :        2045.17 127.82 695.28   0.00  213146.53   4616.01  257129.07
00:21:26.333 [2024-07-15 20:47:00.564016] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:26.333 [2024-07-15 20:47:00.564050] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:21:26.333 [2024-07-15 20:47:00.564098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x56c340 (9): Bad file descriptor
00:21:26.333 [2024-07-15 20:47:00.564448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:26.333 [2024-07-15 20:47:00.564468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5a1d0 with addr=10.0.0.2, port=4420
00:21:26.333 [2024-07-15 20:47:00.564478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5a1d0 is same with the state(5) to be set
[... the same connect() failed (errno = 111) / sock connection error / recv state trio repeats for tqpair=0xa64bf0 and tqpair=0xbe90d0 ...]
00:21:26.333 [2024-07-15 20:47:00.565077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:21:26.333 [2024-07-15 20:47:00.565084] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:21:26.333 [2024-07-15 20:47:00.565092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:21:26.333 [2024-07-15 20:47:00.565142] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:26.333 [2024-07-15 20:47:00.565854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
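The table above is standard bdevperf verify-workload output: per-bdev runtime, IOPS, throughput, failed I/O per second and latency statistics for the ten NVMe-oF bdevs that were still in flight when the target went down. A run of the same shape can be reproduced by hand; this is only a sketch, with the JSON config path and the 10-second runtime as placeholders (queue depth, IO size and workload are taken from the job lines above):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Drive the attached Nvme*n1 bdevs with the same parameters the failed run used.
    $SPDK/build/examples/bdevperf --json /tmp/bdevperf.json \
        -q 64 -o 65536 -w verify -t 10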
00:21:26.333 [2024-07-15 20:47:00.565890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5a1d0 (9): Bad file descriptor
00:21:26.333 [2024-07-15 20:47:00.565902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa64bf0 (9): Bad file descriptor
00:21:26.333 [2024-07-15 20:47:00.565910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe90d0 (9): Bad file descriptor
00:21:26.333 [2024-07-15 20:47:00.565956] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:21:26.333 [2024-07-15 20:47:00.565967] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:21:26.333 [2024-07-15 20:47:00.565976] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:26.333 [2024-07-15 20:47:00.565984] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:21:26.333 [2024-07-15 20:47:00.565992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:21:26.333 [2024-07-15 20:47:00.566001] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:21:26.333 [2024-07-15 20:47:00.566037] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:21:26.333 [2024-07-15 20:47:00.566045] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:21:26.333 [2024-07-15 20:47:00.566051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:21:26.333 [2024-07-15 20:47:00.566060] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:21:26.333 [2024-07-15 20:47:00.566082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:21:26.333 [2024-07-15 20:47:00.566089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:21:26.333 [2024-07-15 20:47:00.566098] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:21:26.333 [2024-07-15 20:47:00.566105] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:21:26.333 [2024-07-15 20:47:00.566111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:21:26.333 [2024-07-15 20:47:00.566156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:26.333 [2024-07-15 20:47:00.566165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:26.333 [2024-07-15 20:47:00.566171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
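The nvme_ctrlr_disconnect notices are the bdev_nvme layer starting another reset cycle for each failed controller: disconnect the qpairs, then try to reconnect and reinitialize. The same cycle can be exercised by hand against a running SPDK app for debugging; a sketch, where both the RPC socket path and the controller name Nvme0 are assumptions for illustration:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Ask bdev_nvme to disconnect and reconnect one attached controller.
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_reset_controller Nvme0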
00:21:26.333 [2024-07-15 20:47:00.566459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:26.333 [2024-07-15 20:47:00.566473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf2050 with addr=10.0.0.2, port=4420
00:21:26.333 [2024-07-15 20:47:00.566481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf2050 is same with the state(5) to be set
[... the same connect() failed (errno = 111) / sock connection error / recv state trio repeats for tqpair=0xbe98d0, 0xa1dc70, 0xbd28b0, 0xa61b30 and 0xa40190, all against addr=10.0.0.2, port=4420 ...]
00:21:26.333 [2024-07-15 20:47:00.567739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf2050 (9): Bad file descriptor
[... the same flush failure repeats for tqpair=0xbe98d0, 0xa1dc70, 0xbd28b0, 0xa61b30 and 0xa40190 ...]
00:21:26.334 [2024-07-15 20:47:00.567810] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:21:26.334 [2024-07-15 20:47:00.567819] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:21:26.334 [2024-07-15 20:47:00.567825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
[... the same Ctrlr is in error state / reinitialization failed / in failed state trio repeats for cnode2, cnode1, cnode10, cnode6 and cnode7 ...]
00:21:26.334 [2024-07-15 20:47:00.567957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[... the same reset failure repeats four more times, through 20:47:00.567982 ...]
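Every reconnect attempt fails the same way: process_init cannot complete because the transport is gone (connect() returns errno 111, connection refused), reconnect_poll_async gives up, and nvme_ctrlr_fail parks the controller in a failed state. How long bdev_nvme keeps retrying before declaring a controller lost is tunable at attach time; a hedged sketch, assuming an SPDK build whose rpc.py exposes the reconnect tunables on bdev_nvme_attach_controller (the specific values are illustrative):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Retry every 2s, fail pending I/O after 10s, give the controller up after 30s.
    $SPDK/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --reconnect-delay-sec 2 --fast-io-fail-timeout-sec 10 --ctrlr-loss-timeout-sec 30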
00:21:26.334 [2024-07-15 20:47:00.567987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:26.594 20:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid=
00:21:26.594 20:47:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1
00:21:27.527 20:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2752137
00:21:27.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2752137) - No such process
00:21:27.527 20:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true
00:21:27.527 20:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget
00:21:27.527 20:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:21:27.527 20:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:21:27.527 20:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:27.527 20:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini
00:21:27.527 20:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup
00:21:27.527 20:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync
00:21:27.527 20:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:21:27.527 20:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e
00:21:27.527 20:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20}
00:21:27.527 20:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:21:27.527 rmmod nvme_tcp
00:21:27.527 rmmod nvme_fabrics
00:21:27.527 rmmod nvme_keyring
00:21:27.527 20:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:21:27.527 20:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e
00:21:27.527 20:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0
00:21:27.527 20:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:21:27.527 20:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:21:27.527 20:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:21:27.527 20:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:21:27.527 20:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:21:27.527 20:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:21:27.527 20:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:27.527 20:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:21:27.527 20:47:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:30.059 20:47:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:21:30.059
00:21:30.059 real 0m7.997s
00:21:30.059 user 0m20.349s
00:21:30.059 sys 0m1.313s
00:21:30.059 20:47:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:21:30.059 20:47:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:30.059 ************************************
00:21:30.059 END TEST nvmf_shutdown_tc3
00:21:30.059 ************************************
00:21:30.059 20:47:04 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0
00:21:30.059 20:47:04 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT
00:21:30.059
00:21:30.059 real 0m31.260s
00:21:30.059 user 1m19.067s
00:21:30.059 sys 0m8.123s
00:21:30.059 20:47:04 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable
00:21:30.059 20:47:04 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:21:30.059 ************************************
00:21:30.059 END TEST nvmf_shutdown
00:21:30.059 ************************************
00:21:30.059 20:47:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:21:30.059 20:47:04 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target
00:21:30.059 20:47:04 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:21:30.059 20:47:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:21:30.059 20:47:04 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host
00:21:30.059 20:47:04 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable
00:21:30.059 20:47:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:21:30.059 20:47:04 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]]
00:21:30.059 20:47:04 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:21:30.059 20:47:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:21:30.059 20:47:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:21:30.059 20:47:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:21:30.059 ************************************
00:21:30.059 START TEST nvmf_multicontroller
00:21:30.059 ************************************
00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:21:30.059 * Looking for test storage...
00:21:30.059 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:30.059 20:47:04 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:21:30.059 20:47:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:35.329 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:35.329 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:21:35.329 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:35.329 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:35.329 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:35.330 20:47:09 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:35.330 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:35.330 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:35.330 Found net devices under 0000:86:00.0: cvl_0_0 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:35.330 Found net devices under 0000:86:00.1: cvl_0_1 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:35.330 20:47:09 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:35.330 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:35.331 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:35.331 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:35.331 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:35.331 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:35.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:35.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:21:35.331 00:21:35.331 --- 10.0.0.2 ping statistics --- 00:21:35.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.331 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:21:35.331 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:35.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:35.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:21:35.331 00:21:35.331 --- 10.0.0.1 ping statistics --- 00:21:35.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.331 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:21:35.331 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:35.331 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:21:35.331 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:35.331 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:35.331 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:35.331 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:35.331 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:35.331 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:35.331 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:35.331 20:47:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:35.331 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:35.331 20:47:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:35.331 20:47:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:35.589 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2756392 00:21:35.589 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2756392 00:21:35.589 20:47:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:35.589 20:47:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2756392 ']' 00:21:35.589 20:47:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.589 20:47:09 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:21:35.589 20:47:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.589 20:47:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:35.589 20:47:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:35.589 [2024-07-15 20:47:09.865435] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:21:35.589 [2024-07-15 20:47:09.865478] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.589 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.589 [2024-07-15 20:47:09.923237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:35.589 [2024-07-15 20:47:10.000881] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:35.589 [2024-07-15 20:47:10.000919] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.589 [2024-07-15 20:47:10.000927] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.589 [2024-07-15 20:47:10.000936] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.589 [2024-07-15 20:47:10.000942] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
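The target in this test runs inside the cvl_0_0_ns_spdk network namespace that nvmftestinit created around one of the two cvl_0_* ports, so its listeners on 10.0.0.2 are reachable from the host-side initiator on 10.0.0.1. In miniature, the launch being logged here looks like the following sketch, distilled from the nvmfappstart lines above (the wait loop is a simplification of the harness' waitforlisten):

    # Start the target in the test namespace and wait for its RPC socket.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xE &
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done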
00:21:35.589 [2024-07-15 20:47:10.001043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:35.589 [2024-07-15 20:47:10.001128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:35.589 [2024-07-15 20:47:10.001129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.532 [2024-07-15 20:47:10.714946] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.532 Malloc0 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.532 [2024-07-15 20:47:10.767443] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.532 
20:47:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.532 [2024-07-15 20:47:10.775358] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.532 Malloc1 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2756510 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 2756510 /var/tmp/bdevperf.sock 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2756510 ']' 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:36.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:36.532 20:47:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.466 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:37.466 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:21:37.466 20:47:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:37.466 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.466 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.466 NVMe0n1 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.467 1 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.467 request: 00:21:37.467 { 00:21:37.467 "name": "NVMe0", 00:21:37.467 "trtype": "tcp", 00:21:37.467 "traddr": "10.0.0.2", 00:21:37.467 "adrfam": "ipv4", 00:21:37.467 "trsvcid": "4420", 00:21:37.467 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:37.467 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:37.467 "hostaddr": "10.0.0.2", 00:21:37.467 "hostsvcid": "60000", 00:21:37.467 "prchk_reftag": false, 00:21:37.467 "prchk_guard": false, 00:21:37.467 "hdgst": false, 00:21:37.467 "ddgst": false, 00:21:37.467 "method": "bdev_nvme_attach_controller", 00:21:37.467 "req_id": 1 00:21:37.467 } 00:21:37.467 Got JSON-RPC error response 00:21:37.467 response: 00:21:37.467 { 00:21:37.467 "code": -114, 00:21:37.467 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:37.467 } 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.467 request: 00:21:37.467 { 00:21:37.467 "name": "NVMe0", 00:21:37.467 "trtype": "tcp", 00:21:37.467 "traddr": "10.0.0.2", 00:21:37.467 "adrfam": "ipv4", 00:21:37.467 "trsvcid": "4420", 00:21:37.467 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:37.467 "hostaddr": "10.0.0.2", 00:21:37.467 "hostsvcid": "60000", 00:21:37.467 "prchk_reftag": false, 00:21:37.467 "prchk_guard": false, 00:21:37.467 
"hdgst": false, 00:21:37.467 "ddgst": false, 00:21:37.467 "method": "bdev_nvme_attach_controller", 00:21:37.467 "req_id": 1 00:21:37.467 } 00:21:37.467 Got JSON-RPC error response 00:21:37.467 response: 00:21:37.467 { 00:21:37.467 "code": -114, 00:21:37.467 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:37.467 } 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.467 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.726 request: 00:21:37.726 { 00:21:37.726 "name": "NVMe0", 00:21:37.726 "trtype": "tcp", 00:21:37.726 "traddr": "10.0.0.2", 00:21:37.726 "adrfam": "ipv4", 00:21:37.726 "trsvcid": "4420", 00:21:37.726 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:37.726 "hostaddr": "10.0.0.2", 00:21:37.726 "hostsvcid": "60000", 00:21:37.726 "prchk_reftag": false, 00:21:37.726 "prchk_guard": false, 00:21:37.726 "hdgst": false, 00:21:37.726 "ddgst": false, 00:21:37.726 "multipath": "disable", 00:21:37.726 "method": "bdev_nvme_attach_controller", 00:21:37.726 "req_id": 1 00:21:37.726 } 00:21:37.726 Got JSON-RPC error response 00:21:37.726 response: 00:21:37.726 { 00:21:37.726 "code": -114, 00:21:37.726 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:21:37.726 } 00:21:37.726 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:37.726 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:37.726 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:37.726 20:47:11 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:37.726 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:37.726 20:47:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:37.726 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:37.726 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:37.726 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:37.726 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:37.726 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:37.726 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:37.726 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:37.726 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.726 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.726 request: 00:21:37.726 { 00:21:37.726 "name": "NVMe0", 00:21:37.726 "trtype": "tcp", 00:21:37.726 "traddr": "10.0.0.2", 00:21:37.726 "adrfam": "ipv4", 00:21:37.726 "trsvcid": "4420", 00:21:37.726 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:37.726 "hostaddr": "10.0.0.2", 00:21:37.726 "hostsvcid": "60000", 00:21:37.726 "prchk_reftag": false, 00:21:37.726 "prchk_guard": false, 00:21:37.726 "hdgst": false, 00:21:37.726 "ddgst": false, 00:21:37.726 "multipath": "failover", 00:21:37.726 "method": "bdev_nvme_attach_controller", 00:21:37.726 "req_id": 1 00:21:37.726 } 00:21:37.726 Got JSON-RPC error response 00:21:37.726 response: 00:21:37.726 { 00:21:37.726 "code": -114, 00:21:37.726 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:37.726 } 00:21:37.726 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:37.726 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:37.727 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:37.727 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:37.727 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:37.727 20:47:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:37.727 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.727 20:47:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.727 00:21:37.727 20:47:12 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.727 20:47:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:37.727 20:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.727 20:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.727 20:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.727 20:47:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:37.727 20:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.727 20:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.727 00:21:37.727 20:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.727 20:47:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:37.727 20:47:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:37.727 20:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.727 20:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.727 20:47:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.727 20:47:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:37.727 20:47:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:39.102 0 00:21:39.102 20:47:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:39.102 20:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.102 20:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:39.102 20:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.102 20:47:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2756510 00:21:39.102 20:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2756510 ']' 00:21:39.102 20:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2756510 00:21:39.102 20:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:21:39.102 20:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:39.102 20:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2756510 00:21:39.102 20:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:39.102 20:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:39.103 20:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2756510' 00:21:39.103 killing process with pid 2756510 00:21:39.103 20:47:13 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2756510 00:21:39.103 20:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2756510 00:21:39.103 20:47:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:39.103 20:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.103 20:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:39.103 20:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.103 20:47:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:39.103 20:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.103 20:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:39.103 20:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.103 20:47:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:21:39.103 20:47:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:39.103 20:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:21:39.103 20:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:39.103 20:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:21:39.103 20:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:21:39.103 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:39.103 [2024-07-15 20:47:10.875729] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:21:39.103 [2024-07-15 20:47:10.875778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2756510 ] 00:21:39.103 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.103 [2024-07-15 20:47:10.930966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.103 [2024-07-15 20:47:11.005929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.103 [2024-07-15 20:47:12.163335] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name be5130ff-4634-4be0-b533-3fe87a192ec0 already exists 00:21:39.103 [2024-07-15 20:47:12.163362] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:be5130ff-4634-4be0-b533-3fe87a192ec0 alias for bdev NVMe1n1 00:21:39.103 [2024-07-15 20:47:12.163370] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:39.103 Running I/O for 1 seconds... 
00:21:39.103
00:21:39.103 Latency(us)
00:21:39.103 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:39.103 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:21:39.103 NVMe0n1 : 1.01 24442.43 95.48 0.00 0.00 5230.74 3148.58 9516.97
00:21:39.103 ===================================================================================================================
00:21:39.103 Total : 24442.43 95.48 0.00 0.00 5230.74 3148.58 9516.97
00:21:39.103 Received shutdown signal, test time was about 1.000000 seconds
00:21:39.103
00:21:39.103 Latency(us)
00:21:39.103 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:39.103 ===================================================================================================================
00:21:39.103 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:39.103 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:21:39.103 20:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:21:39.103 20:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file
00:21:39.103 20:47:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini
00:21:39.103 20:47:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup
00:21:39.103 20:47:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync
00:21:39.363 20:47:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:21:39.363 20:47:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e
00:21:39.363 20:47:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20}
00:21:39.363 20:47:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:21:39.363 rmmod nvme_tcp
00:21:39.363 rmmod nvme_fabrics
00:21:39.363 rmmod nvme_keyring
00:21:39.363 20:47:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:21:39.363 20:47:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e
00:21:39.363 20:47:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0
00:21:39.363 20:47:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2756392 ']'
00:21:39.363 20:47:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2756392
00:21:39.363 20:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2756392 ']'
00:21:39.363 20:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2756392
00:21:39.363 20:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname
00:21:39.363 20:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:21:39.363 20:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2756392
00:21:39.363 20:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:21:39.363 20:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:21:39.363 20:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2756392'
00:21:39.363 killing process with pid 2756392
20:47:13
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2756392 00:21:39.622 20:47:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:39.622 20:47:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:39.622 20:47:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:39.622 20:47:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:39.622 20:47:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:39.622 20:47:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.622 20:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:39.622 20:47:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.598 20:47:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:41.598 00:21:41.598 real 0m11.783s 00:21:41.598 user 0m16.282s 00:21:41.598 sys 0m4.813s 00:21:41.598 20:47:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:41.598 20:47:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.598 ************************************ 00:21:41.598 END TEST nvmf_multicontroller 00:21:41.598 ************************************ 00:21:41.598 20:47:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:41.598 20:47:16 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:41.598 20:47:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:41.598 20:47:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:41.598 20:47:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:41.598 ************************************ 00:21:41.598 START TEST nvmf_aer 00:21:41.598 ************************************ 00:21:41.598 20:47:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:41.857 * Looking for test storage... 
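(The START TEST/END TEST banners in this log come from the harness's run_test wrapper. Roughly, and only as a sketch under that assumption — the real helper in autotest_common.sh also records timing and xtrace state:)

    # run_test <name> <cmd...>: banner the test, run it, and propagate its exit code.
    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }
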
00:21:41.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:41.857 20:47:16 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:41.857 20:47:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:41.857 20:47:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:41.857 20:47:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:41.857 20:47:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:41.857 20:47:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:41.857 20:47:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:41.857 20:47:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:41.857 20:47:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:41.857 20:47:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:41.857 20:47:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:41.857 20:47:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:41.857 20:47:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:41.857 20:47:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:41.857 20:47:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:41.857 20:47:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:41.857 20:47:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:41.857 20:47:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:41.857 20:47:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:41.857 20:47:16 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:41.857 20:47:16 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:41.857 20:47:16 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:41.858 20:47:16 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.858 20:47:16 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.858 20:47:16 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.858 20:47:16 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:41.858 20:47:16 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.858 20:47:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:21:41.858 20:47:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:41.858 20:47:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:41.858 20:47:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:41.858 20:47:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:41.858 20:47:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:41.858 20:47:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:41.858 20:47:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:41.858 20:47:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:41.858 20:47:16 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:41.858 20:47:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:41.858 20:47:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:41.858 20:47:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:41.858 20:47:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:41.858 20:47:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:41.858 20:47:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.858 20:47:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:41.858 20:47:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.858 20:47:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:41.858 20:47:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:41.858 20:47:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:21:41.858 20:47:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:47.128 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 
0x159b)' 00:21:47.128 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:47.128 Found net devices under 0000:86:00.0: cvl_0_0 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:47.128 Found net devices under 0000:86:00.1: cvl_0_1 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:47.128 
20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:47.128 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:47.128 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:21:47.128 00:21:47.128 --- 10.0.0.2 ping statistics --- 00:21:47.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.128 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:47.128 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:47.128 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:21:47.128 00:21:47.128 --- 10.0.0.1 ping statistics --- 00:21:47.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.128 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2760420 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2760420 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 2760420 ']' 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:47.128 20:47:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:47.128 [2024-07-15 20:47:21.420986] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:21:47.128 [2024-07-15 20:47:21.421032] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:47.128 EAL: No free 2048 kB hugepages reported on node 1 00:21:47.128 [2024-07-15 20:47:21.476289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:47.128 [2024-07-15 20:47:21.557415] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:47.128 [2024-07-15 20:47:21.557449] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
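(Following the hint in the notice above — a sketch of how a trace snapshot could be taken from another shell while this app runs; the binary location assumes a recent SPDK build tree and may differ in older layouts:)

    # Live snapshot of the running app's tracepoints (shm name 'nvmf', instance ID 0).
    ./build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt
    # Offline: parse a copy of the shared-memory trace file named in these notices.
    ./build/bin/spdk_trace -f /dev/shm/nvmf_trace.0 > nvmf_trace_offline.txt
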
00:21:47.128 [2024-07-15 20:47:21.557456] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:47.128 [2024-07-15 20:47:21.557462] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:47.128 [2024-07-15 20:47:21.557467] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:47.128 [2024-07-15 20:47:21.557508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.128 [2024-07-15 20:47:21.557605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:47.128 [2024-07-15 20:47:21.557689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:47.128 [2024-07-15 20:47:21.557691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.061 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:48.061 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:21:48.061 20:47:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:48.061 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:48.061 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:48.061 20:47:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:48.061 20:47:22 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:48.061 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.061 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:48.061 [2024-07-15 20:47:22.282342] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:48.061 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.061 20:47:22 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:48.061 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.061 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:48.061 Malloc0 00:21:48.061 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.061 20:47:22 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:48.061 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.061 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:48.061 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.061 20:47:22 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:48.061 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.061 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:48.061 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.061 20:47:22 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:48.061 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.061 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:48.061 [2024-07-15 20:47:22.334058] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:21:48.061 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.061 20:47:22 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:48.061 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.061 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:48.061 [ 00:21:48.061 { 00:21:48.061 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:48.061 "subtype": "Discovery", 00:21:48.061 "listen_addresses": [], 00:21:48.061 "allow_any_host": true, 00:21:48.061 "hosts": [] 00:21:48.061 }, 00:21:48.061 { 00:21:48.061 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.061 "subtype": "NVMe", 00:21:48.061 "listen_addresses": [ 00:21:48.061 { 00:21:48.061 "trtype": "TCP", 00:21:48.061 "adrfam": "IPv4", 00:21:48.061 "traddr": "10.0.0.2", 00:21:48.061 "trsvcid": "4420" 00:21:48.061 } 00:21:48.061 ], 00:21:48.061 "allow_any_host": true, 00:21:48.061 "hosts": [], 00:21:48.061 "serial_number": "SPDK00000000000001", 00:21:48.061 "model_number": "SPDK bdev Controller", 00:21:48.061 "max_namespaces": 2, 00:21:48.061 "min_cntlid": 1, 00:21:48.061 "max_cntlid": 65519, 00:21:48.061 "namespaces": [ 00:21:48.061 { 00:21:48.061 "nsid": 1, 00:21:48.061 "bdev_name": "Malloc0", 00:21:48.061 "name": "Malloc0", 00:21:48.061 "nguid": "F9E6E9AB23F94074A092080BC4A3CD8C", 00:21:48.061 "uuid": "f9e6e9ab-23f9-4074-a092-080bc4a3cd8c" 00:21:48.061 } 00:21:48.061 ] 00:21:48.061 } 00:21:48.061 ] 00:21:48.061 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.061 20:47:22 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:48.061 20:47:22 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:48.061 20:47:22 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=2760664 00:21:48.061 20:47:22 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:48.061 20:47:22 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:48.061 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:21:48.061 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:48.061 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:21:48.061 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:21:48.062 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:48.062 EAL: No free 2048 kB hugepages reported on node 1 00:21:48.062 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:48.062 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:21:48.062 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:21:48.062 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:48.320 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:48.320 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:48.320 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:21:48.320 20:47:22 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:48.320 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.320 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:48.320 Malloc1 00:21:48.320 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.320 20:47:22 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:48.320 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.320 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:48.320 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.320 20:47:22 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:48.320 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.320 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:48.320 Asynchronous Event Request test 00:21:48.320 Attaching to 10.0.0.2 00:21:48.320 Attached to 10.0.0.2 00:21:48.320 Registering asynchronous event callbacks... 00:21:48.320 Starting namespace attribute notice tests for all controllers... 00:21:48.320 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:48.320 aer_cb - Changed Namespace 00:21:48.320 Cleaning up... 00:21:48.320 [ 00:21:48.320 { 00:21:48.320 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:48.320 "subtype": "Discovery", 00:21:48.320 "listen_addresses": [], 00:21:48.320 "allow_any_host": true, 00:21:48.320 "hosts": [] 00:21:48.320 }, 00:21:48.320 { 00:21:48.320 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.320 "subtype": "NVMe", 00:21:48.320 "listen_addresses": [ 00:21:48.320 { 00:21:48.320 "trtype": "TCP", 00:21:48.320 "adrfam": "IPv4", 00:21:48.320 "traddr": "10.0.0.2", 00:21:48.320 "trsvcid": "4420" 00:21:48.320 } 00:21:48.320 ], 00:21:48.320 "allow_any_host": true, 00:21:48.320 "hosts": [], 00:21:48.320 "serial_number": "SPDK00000000000001", 00:21:48.320 "model_number": "SPDK bdev Controller", 00:21:48.320 "max_namespaces": 2, 00:21:48.320 "min_cntlid": 1, 00:21:48.320 "max_cntlid": 65519, 00:21:48.320 "namespaces": [ 00:21:48.320 { 00:21:48.320 "nsid": 1, 00:21:48.320 "bdev_name": "Malloc0", 00:21:48.320 "name": "Malloc0", 00:21:48.320 "nguid": "F9E6E9AB23F94074A092080BC4A3CD8C", 00:21:48.320 "uuid": "f9e6e9ab-23f9-4074-a092-080bc4a3cd8c" 00:21:48.320 }, 00:21:48.320 { 00:21:48.320 "nsid": 2, 00:21:48.320 "bdev_name": "Malloc1", 00:21:48.320 "name": "Malloc1", 00:21:48.320 "nguid": "D682035315B344FB8D693AB231BDABCF", 00:21:48.320 "uuid": "d6820353-15b3-44fb-8d69-3ab231bdabcf" 00:21:48.320 } 00:21:48.320 ] 00:21:48.320 } 00:21:48.320 ] 00:21:48.320 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.320 20:47:22 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 2760664 00:21:48.320 20:47:22 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:48.320 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.320 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:48.320 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.320 20:47:22 nvmf_tcp.nvmf_aer -- host/aer.sh@46 
-- # rpc_cmd bdev_malloc_delete Malloc1 00:21:48.320 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.320 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:48.320 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.320 20:47:22 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:48.320 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.320 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:48.320 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.321 20:47:22 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:48.321 20:47:22 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:48.321 20:47:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:48.321 20:47:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:21:48.321 20:47:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:48.321 20:47:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:21:48.321 20:47:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:48.321 20:47:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:48.321 rmmod nvme_tcp 00:21:48.321 rmmod nvme_fabrics 00:21:48.321 rmmod nvme_keyring 00:21:48.321 20:47:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:48.321 20:47:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:21:48.321 20:47:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:21:48.321 20:47:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2760420 ']' 00:21:48.321 20:47:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2760420 00:21:48.321 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 2760420 ']' 00:21:48.321 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 2760420 00:21:48.321 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:21:48.321 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:48.321 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2760420 00:21:48.579 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:48.579 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:48.579 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2760420' 00:21:48.579 killing process with pid 2760420 00:21:48.579 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 2760420 00:21:48.579 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 2760420 00:21:48.579 20:47:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:48.579 20:47:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:48.579 20:47:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:48.579 20:47:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:48.579 20:47:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:48.579 20:47:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.579 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:21:48.579 20:47:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.113 20:47:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:51.113 00:21:51.113 real 0m9.001s 00:21:51.113 user 0m7.186s 00:21:51.113 sys 0m4.344s 00:21:51.113 20:47:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:51.113 20:47:25 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:51.113 ************************************ 00:21:51.113 END TEST nvmf_aer 00:21:51.113 ************************************ 00:21:51.113 20:47:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:51.113 20:47:25 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:51.113 20:47:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:51.113 20:47:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:51.113 20:47:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:51.113 ************************************ 00:21:51.113 START TEST nvmf_async_init 00:21:51.113 ************************************ 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:51.113 * Looking for test storage... 00:21:51.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=b1255e4077964d09a99df3f976530ff8 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:21:51.113 20:47:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:56.385 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:56.385 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ tcp == rdma ]] 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:56.385 Found net devices under 0000:86:00.0: cvl_0_0 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:56.385 Found net devices under 0000:86:00.1: cvl_0_1 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:56.385 
20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:56.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:56.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:21:56.385 00:21:56.385 --- 10.0.0.2 ping statistics --- 00:21:56.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.385 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:56.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:56.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:21:56.385 00:21:56.385 --- 10.0.0.1 ping statistics --- 00:21:56.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.385 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:56.385 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:56.386 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:56.386 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:56.386 20:47:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:56.386 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:56.386 20:47:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:56.386 20:47:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.386 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2764104 00:21:56.386 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:56.386 20:47:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 
2764104 00:21:56.386 20:47:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 2764104 ']' 00:21:56.386 20:47:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.386 20:47:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:56.386 20:47:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:56.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.386 20:47:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:56.386 20:47:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.386 [2024-07-15 20:47:30.645800] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:21:56.386 [2024-07-15 20:47:30.645846] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:56.386 EAL: No free 2048 kB hugepages reported on node 1 00:21:56.386 [2024-07-15 20:47:30.704585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.386 [2024-07-15 20:47:30.778556] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:56.386 [2024-07-15 20:47:30.778598] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:56.386 [2024-07-15 20:47:30.778605] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:56.386 [2024-07-15 20:47:30.778611] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:56.386 [2024-07-15 20:47:30.778616] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
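
For reference, the namespaced target bring-up that the steps above just performed reduces to the short sequence below. This is a condensed sketch of what nvmf/common.sh executed in this particular run, not a general recipe: the cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addressing, and the nvmf_tgt path are specific to this rig, and the core mask (-m 0x1) is what this test requested.

  # move the target-side port into its own namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  # bring the links (and the namespaced loopback) up
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # let initiator-side traffic reach the NVMe/TCP port
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # start the target inside the namespace, then poll /var/tmp/spdk.sock
  # (waitforlisten) before issuing any rpc_cmd calls
  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
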
00:21:56.386 [2024-07-15 20:47:30.778640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.322 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:57.322 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:21:57.322 20:47:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:57.322 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:57.322 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.322 20:47:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:57.322 20:47:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:57.322 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.322 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.322 [2024-07-15 20:47:31.480820] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:57.322 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.322 20:47:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:57.322 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.322 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.322 null0 00:21:57.322 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.322 20:47:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:57.322 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.322 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.322 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.322 20:47:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:57.322 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.322 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.322 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.322 20:47:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g b1255e4077964d09a99df3f976530ff8 00:21:57.322 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.322 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.322 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.322 20:47:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:57.322 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.322 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.322 [2024-07-15 20:47:31.521024] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:57.322 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:21:57.322 20:47:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:57.322 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.322 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.322 nvme0n1 00:21:57.322 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.322 20:47:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:57.322 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.322 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.322 [ 00:21:57.323 { 00:21:57.323 "name": "nvme0n1", 00:21:57.323 "aliases": [ 00:21:57.323 "b1255e40-7796-4d09-a99d-f3f976530ff8" 00:21:57.323 ], 00:21:57.323 "product_name": "NVMe disk", 00:21:57.323 "block_size": 512, 00:21:57.323 "num_blocks": 2097152, 00:21:57.323 "uuid": "b1255e40-7796-4d09-a99d-f3f976530ff8", 00:21:57.323 "assigned_rate_limits": { 00:21:57.323 "rw_ios_per_sec": 0, 00:21:57.323 "rw_mbytes_per_sec": 0, 00:21:57.323 "r_mbytes_per_sec": 0, 00:21:57.323 "w_mbytes_per_sec": 0 00:21:57.323 }, 00:21:57.323 "claimed": false, 00:21:57.323 "zoned": false, 00:21:57.323 "supported_io_types": { 00:21:57.323 "read": true, 00:21:57.323 "write": true, 00:21:57.323 "unmap": false, 00:21:57.323 "flush": true, 00:21:57.323 "reset": true, 00:21:57.323 "nvme_admin": true, 00:21:57.323 "nvme_io": true, 00:21:57.323 "nvme_io_md": false, 00:21:57.323 "write_zeroes": true, 00:21:57.323 "zcopy": false, 00:21:57.323 "get_zone_info": false, 00:21:57.323 "zone_management": false, 00:21:57.323 "zone_append": false, 00:21:57.323 "compare": true, 00:21:57.323 "compare_and_write": true, 00:21:57.323 "abort": true, 00:21:57.323 "seek_hole": false, 00:21:57.323 "seek_data": false, 00:21:57.323 "copy": true, 00:21:57.323 "nvme_iov_md": false 00:21:57.323 }, 00:21:57.323 "memory_domains": [ 00:21:57.323 { 00:21:57.323 "dma_device_id": "system", 00:21:57.323 "dma_device_type": 1 00:21:57.323 } 00:21:57.323 ], 00:21:57.323 "driver_specific": { 00:21:57.323 "nvme": [ 00:21:57.323 { 00:21:57.323 "trid": { 00:21:57.323 "trtype": "TCP", 00:21:57.323 "adrfam": "IPv4", 00:21:57.323 "traddr": "10.0.0.2", 00:21:57.323 "trsvcid": "4420", 00:21:57.323 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:57.323 }, 00:21:57.323 "ctrlr_data": { 00:21:57.323 "cntlid": 1, 00:21:57.323 "vendor_id": "0x8086", 00:21:57.323 "model_number": "SPDK bdev Controller", 00:21:57.323 "serial_number": "00000000000000000000", 00:21:57.323 "firmware_revision": "24.09", 00:21:57.323 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:57.323 "oacs": { 00:21:57.323 "security": 0, 00:21:57.323 "format": 0, 00:21:57.323 "firmware": 0, 00:21:57.323 "ns_manage": 0 00:21:57.323 }, 00:21:57.323 "multi_ctrlr": true, 00:21:57.323 "ana_reporting": false 00:21:57.323 }, 00:21:57.323 "vs": { 00:21:57.323 "nvme_version": "1.3" 00:21:57.323 }, 00:21:57.323 "ns_data": { 00:21:57.323 "id": 1, 00:21:57.323 "can_share": true 00:21:57.323 } 00:21:57.323 } 00:21:57.323 ], 00:21:57.323 "mp_policy": "active_passive" 00:21:57.323 } 00:21:57.323 } 00:21:57.323 ] 00:21:57.323 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.323 20:47:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 
00:21:57.323 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.323 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.323 [2024-07-15 20:47:31.777562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:57.323 [2024-07-15 20:47:31.777617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1877250 (9): Bad file descriptor 00:21:57.582 [2024-07-15 20:47:31.909307] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:57.582 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.582 20:47:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:57.582 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.582 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.582 [ 00:21:57.582 { 00:21:57.582 "name": "nvme0n1", 00:21:57.582 "aliases": [ 00:21:57.582 "b1255e40-7796-4d09-a99d-f3f976530ff8" 00:21:57.582 ], 00:21:57.582 "product_name": "NVMe disk", 00:21:57.582 "block_size": 512, 00:21:57.582 "num_blocks": 2097152, 00:21:57.582 "uuid": "b1255e40-7796-4d09-a99d-f3f976530ff8", 00:21:57.582 "assigned_rate_limits": { 00:21:57.582 "rw_ios_per_sec": 0, 00:21:57.582 "rw_mbytes_per_sec": 0, 00:21:57.582 "r_mbytes_per_sec": 0, 00:21:57.582 "w_mbytes_per_sec": 0 00:21:57.582 }, 00:21:57.582 "claimed": false, 00:21:57.582 "zoned": false, 00:21:57.582 "supported_io_types": { 00:21:57.582 "read": true, 00:21:57.582 "write": true, 00:21:57.582 "unmap": false, 00:21:57.582 "flush": true, 00:21:57.582 "reset": true, 00:21:57.582 "nvme_admin": true, 00:21:57.582 "nvme_io": true, 00:21:57.582 "nvme_io_md": false, 00:21:57.582 "write_zeroes": true, 00:21:57.582 "zcopy": false, 00:21:57.582 "get_zone_info": false, 00:21:57.582 "zone_management": false, 00:21:57.582 "zone_append": false, 00:21:57.582 "compare": true, 00:21:57.582 "compare_and_write": true, 00:21:57.582 "abort": true, 00:21:57.582 "seek_hole": false, 00:21:57.582 "seek_data": false, 00:21:57.582 "copy": true, 00:21:57.582 "nvme_iov_md": false 00:21:57.582 }, 00:21:57.582 "memory_domains": [ 00:21:57.582 { 00:21:57.582 "dma_device_id": "system", 00:21:57.582 "dma_device_type": 1 00:21:57.582 } 00:21:57.582 ], 00:21:57.582 "driver_specific": { 00:21:57.582 "nvme": [ 00:21:57.582 { 00:21:57.582 "trid": { 00:21:57.582 "trtype": "TCP", 00:21:57.582 "adrfam": "IPv4", 00:21:57.582 "traddr": "10.0.0.2", 00:21:57.582 "trsvcid": "4420", 00:21:57.582 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:57.582 }, 00:21:57.582 "ctrlr_data": { 00:21:57.582 "cntlid": 2, 00:21:57.582 "vendor_id": "0x8086", 00:21:57.582 "model_number": "SPDK bdev Controller", 00:21:57.582 "serial_number": "00000000000000000000", 00:21:57.582 "firmware_revision": "24.09", 00:21:57.582 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:57.582 "oacs": { 00:21:57.582 "security": 0, 00:21:57.582 "format": 0, 00:21:57.582 "firmware": 0, 00:21:57.582 "ns_manage": 0 00:21:57.582 }, 00:21:57.582 "multi_ctrlr": true, 00:21:57.582 "ana_reporting": false 00:21:57.582 }, 00:21:57.582 "vs": { 00:21:57.582 "nvme_version": "1.3" 00:21:57.582 }, 00:21:57.582 "ns_data": { 00:21:57.582 "id": 1, 00:21:57.582 "can_share": true 00:21:57.582 } 00:21:57.582 } 00:21:57.582 ], 00:21:57.582 "mp_policy": "active_passive" 00:21:57.582 } 00:21:57.582 } 
00:21:57.582 ] 00:21:57.583 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.583 20:47:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:57.583 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.583 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.583 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.583 20:47:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:57.583 20:47:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.QAAmXGh6EB 00:21:57.583 20:47:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:57.583 20:47:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.QAAmXGh6EB 00:21:57.583 20:47:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:57.583 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.583 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.583 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.583 20:47:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:57.583 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.583 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.583 [2024-07-15 20:47:31.970160] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:57.583 [2024-07-15 20:47:31.970260] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:57.583 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.583 20:47:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QAAmXGh6EB 00:21:57.583 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.583 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.583 [2024-07-15 20:47:31.978174] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:57.583 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.583 20:47:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QAAmXGh6EB 00:21:57.583 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.583 20:47:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.583 [2024-07-15 20:47:31.986209] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:57.583 [2024-07-15 20:47:31.986267] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 
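
The TLS exercise just logged comes down to a short RPC sequence: write a PSK to a mode-0600 file, disable allow-any-host on the subsystem, open a --secure-channel listener on port 4421, register the host NQN with the key, and attach with the same key. A condensed sketch of those steps as driven here through rpc_cmd (the key value is the test's example key as printed in the log, not a secret; the real run uses a mktemp path where /tmp/psk.txt stands in below, and, per the warnings above, the path-based --psk form is deprecated for removal in v24.09):

  KEY_PATH=/tmp/psk.txt   # the test itself uses a mktemp-generated path
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY_PATH"
  chmod 0600 "$KEY_PATH"
  rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
    nqn.2016-06.io.spdk:host1 --psk "$KEY_PATH"
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$KEY_PATH"
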
00:21:57.583 nvme0n1 00:21:57.583 20:47:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.583 20:47:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:57.583 20:47:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.583 20:47:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.583 [ 00:21:57.583 { 00:21:57.583 "name": "nvme0n1", 00:21:57.583 "aliases": [ 00:21:57.583 "b1255e40-7796-4d09-a99d-f3f976530ff8" 00:21:57.583 ], 00:21:57.583 "product_name": "NVMe disk", 00:21:57.583 "block_size": 512, 00:21:57.583 "num_blocks": 2097152, 00:21:57.583 "uuid": "b1255e40-7796-4d09-a99d-f3f976530ff8", 00:21:57.583 "assigned_rate_limits": { 00:21:57.583 "rw_ios_per_sec": 0, 00:21:57.842 "rw_mbytes_per_sec": 0, 00:21:57.842 "r_mbytes_per_sec": 0, 00:21:57.842 "w_mbytes_per_sec": 0 00:21:57.842 }, 00:21:57.842 "claimed": false, 00:21:57.842 "zoned": false, 00:21:57.842 "supported_io_types": { 00:21:57.842 "read": true, 00:21:57.842 "write": true, 00:21:57.842 "unmap": false, 00:21:57.842 "flush": true, 00:21:57.842 "reset": true, 00:21:57.842 "nvme_admin": true, 00:21:57.842 "nvme_io": true, 00:21:57.842 "nvme_io_md": false, 00:21:57.842 "write_zeroes": true, 00:21:57.842 "zcopy": false, 00:21:57.842 "get_zone_info": false, 00:21:57.842 "zone_management": false, 00:21:57.842 "zone_append": false, 00:21:57.842 "compare": true, 00:21:57.842 "compare_and_write": true, 00:21:57.842 "abort": true, 00:21:57.842 "seek_hole": false, 00:21:57.842 "seek_data": false, 00:21:57.842 "copy": true, 00:21:57.842 "nvme_iov_md": false 00:21:57.842 }, 00:21:57.842 "memory_domains": [ 00:21:57.842 { 00:21:57.842 "dma_device_id": "system", 00:21:57.842 "dma_device_type": 1 00:21:57.842 } 00:21:57.842 ], 00:21:57.842 "driver_specific": { 00:21:57.842 "nvme": [ 00:21:57.842 { 00:21:57.842 "trid": { 00:21:57.842 "trtype": "TCP", 00:21:57.842 "adrfam": "IPv4", 00:21:57.842 "traddr": "10.0.0.2", 00:21:57.842 "trsvcid": "4421", 00:21:57.842 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:57.842 }, 00:21:57.842 "ctrlr_data": { 00:21:57.842 "cntlid": 3, 00:21:57.842 "vendor_id": "0x8086", 00:21:57.842 "model_number": "SPDK bdev Controller", 00:21:57.842 "serial_number": "00000000000000000000", 00:21:57.842 "firmware_revision": "24.09", 00:21:57.842 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:57.842 "oacs": { 00:21:57.842 "security": 0, 00:21:57.842 "format": 0, 00:21:57.842 "firmware": 0, 00:21:57.842 "ns_manage": 0 00:21:57.842 }, 00:21:57.842 "multi_ctrlr": true, 00:21:57.842 "ana_reporting": false 00:21:57.842 }, 00:21:57.842 "vs": { 00:21:57.842 "nvme_version": "1.3" 00:21:57.842 }, 00:21:57.842 "ns_data": { 00:21:57.842 "id": 1, 00:21:57.842 "can_share": true 00:21:57.842 } 00:21:57.842 } 00:21:57.842 ], 00:21:57.842 "mp_policy": "active_passive" 00:21:57.842 } 00:21:57.842 } 00:21:57.842 ] 00:21:57.842 20:47:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.842 20:47:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:57.842 20:47:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.842 20:47:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.842 20:47:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.842 20:47:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f 
/tmp/tmp.QAAmXGh6EB 00:21:57.842 20:47:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:21:57.842 20:47:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:21:57.842 20:47:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:57.842 20:47:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:21:57.842 20:47:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:57.842 20:47:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:21:57.842 20:47:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:57.842 20:47:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:57.842 rmmod nvme_tcp 00:21:57.842 rmmod nvme_fabrics 00:21:57.842 rmmod nvme_keyring 00:21:57.842 20:47:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:57.842 20:47:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:21:57.842 20:47:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:21:57.842 20:47:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2764104 ']' 00:21:57.842 20:47:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2764104 00:21:57.842 20:47:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 2764104 ']' 00:21:57.842 20:47:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 2764104 00:21:57.842 20:47:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:21:57.842 20:47:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:57.842 20:47:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2764104 00:21:57.842 20:47:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:57.842 20:47:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:57.842 20:47:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2764104' 00:21:57.842 killing process with pid 2764104 00:21:57.842 20:47:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 2764104 00:21:57.842 [2024-07-15 20:47:32.198671] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:57.842 [2024-07-15 20:47:32.198696] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:57.842 20:47:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 2764104 00:21:58.101 20:47:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:58.101 20:47:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:58.101 20:47:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:58.101 20:47:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:58.101 20:47:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:58.101 20:47:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.101 20:47:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:58.101 20:47:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:22:00.005 20:47:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:00.005 00:22:00.005 real 0m9.308s 00:22:00.005 user 0m3.473s 00:22:00.005 sys 0m4.368s 00:22:00.005 20:47:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:00.005 20:47:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:00.005 ************************************ 00:22:00.005 END TEST nvmf_async_init 00:22:00.005 ************************************ 00:22:00.005 20:47:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:00.005 20:47:34 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:00.005 20:47:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:00.005 20:47:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:00.005 20:47:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:00.264 ************************************ 00:22:00.264 START TEST dma 00:22:00.264 ************************************ 00:22:00.264 20:47:34 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:00.264 * Looking for test storage... 00:22:00.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:00.264 20:47:34 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:00.264 20:47:34 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:22:00.264 20:47:34 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:00.264 20:47:34 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:00.264 20:47:34 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:00.264 20:47:34 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:00.264 20:47:34 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:00.264 20:47:34 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:00.264 20:47:34 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:00.264 20:47:34 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:00.264 20:47:34 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:00.264 20:47:34 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:00.264 20:47:34 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:00.264 20:47:34 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:00.264 20:47:34 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:00.264 20:47:34 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:00.264 20:47:34 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:00.264 20:47:34 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:00.264 20:47:34 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:00.264 20:47:34 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:00.264 20:47:34 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:00.264 20:47:34 nvmf_tcp.dma -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:22:00.264 20:47:34 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.264 20:47:34 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.264 20:47:34 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.264 20:47:34 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:22:00.264 20:47:34 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.264 20:47:34 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:22:00.264 20:47:34 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:00.264 20:47:34 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:00.264 20:47:34 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:00.264 20:47:34 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:00.264 20:47:34 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:00.265 20:47:34 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:00.265 20:47:34 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:00.265 20:47:34 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:00.265 20:47:34 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:00.265 20:47:34 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:22:00.265 00:22:00.265 real 0m0.107s 00:22:00.265 user 0m0.053s 00:22:00.265 sys 0m0.061s 00:22:00.265 20:47:34 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:00.265 20:47:34 nvmf_tcp.dma 
-- common/autotest_common.sh@10 -- # set +x 00:22:00.265 ************************************ 00:22:00.265 END TEST dma 00:22:00.265 ************************************ 00:22:00.265 20:47:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:00.265 20:47:34 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:00.265 20:47:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:00.265 20:47:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:00.265 20:47:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:00.265 ************************************ 00:22:00.265 START TEST nvmf_identify 00:22:00.265 ************************************ 00:22:00.265 20:47:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:00.524 * Looking for test storage... 00:22:00.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:00.524 20:47:34 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:00.524 20:47:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:00.524 20:47:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:00.524 20:47:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:00.524 20:47:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:00.524 20:47:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:00.524 20:47:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:00.524 20:47:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:00.524 20:47:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:00.524 20:47:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:00.524 20:47:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:00.524 20:47:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:00.524 20:47:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:00.524 20:47:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:00.524 20:47:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:00.524 20:47:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:00.524 20:47:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:00.524 20:47:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:00.524 20:47:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:00.524 20:47:34 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:00.524 20:47:34 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:00.524 20:47:34 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:00.524 20:47:34 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.524 20:47:34 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.524 20:47:34 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.524 20:47:34 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:00.524 20:47:34 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.524 20:47:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:22:00.524 20:47:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:00.524 20:47:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:00.524 20:47:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:00.524 20:47:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:00.524 20:47:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:00.524 20:47:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:00.524 20:47:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:00.524 20:47:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:00.524 20:47:34 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:00.525 20:47:34 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:00.525 20:47:34 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:00.525 20:47:34 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:00.525 20:47:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:00.525 20:47:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:00.525 20:47:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:00.525 20:47:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:00.525 20:47:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.525 20:47:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:00.525 20:47:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.525 20:47:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:00.525 20:47:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:00.525 20:47:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:22:00.525 20:47:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:05.797 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:05.797 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:05.797 Found net devices under 0000:86:00.0: cvl_0_0 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:05.797 Found net devices under 0000:86:00.1: cvl_0_1 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:05.797 20:47:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:05.797 20:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:05.797 20:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:05.797 20:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:05.797 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:05.797 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:22:05.797 00:22:05.797 --- 10.0.0.2 ping statistics --- 00:22:05.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.797 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:22:05.797 20:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:05.797 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:05.797 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:22:05.797 00:22:05.797 --- 10.0.0.1 ping statistics --- 00:22:05.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.797 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:22:05.797 20:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:05.797 20:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:22:05.797 20:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:05.797 20:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:05.797 20:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:05.797 20:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:05.797 20:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:05.797 20:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:05.797 20:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:05.797 20:47:40 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:05.797 20:47:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:05.797 20:47:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:05.797 20:47:40 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:05.797 20:47:40 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2767775 00:22:05.797 20:47:40 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:05.797 20:47:40 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2767775 00:22:05.797 20:47:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 2767775 ']' 00:22:05.797 20:47:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.797 20:47:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:05.797 20:47:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.797 20:47:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:05.797 20:47:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:05.797 [2024-07-15 20:47:40.122391] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
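The ping exchange above verifies the loopback topology that nvmf_tcp_init builds: the target-side port is moved into a private network namespace so that initiator and target can exchange NVMe/TCP traffic over real NICs on a single host. A minimal sketch of that setup, using the interface names and addresses this run discovered (cvl_0_0/cvl_0_1, 10.0.0.1/10.0.0.2; run as root):

  # Sketch of the namespace topology exercised above; interface names are the
  # ones this run discovered, not fixed values.
  ip netns add cvl_0_0_ns_spdk                       # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, host side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
  ping -c 1 10.0.0.2                                 # host -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> host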
00:22:05.797 [2024-07-15 20:47:40.122437] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.797 EAL: No free 2048 kB hugepages reported on node 1 00:22:05.798 [2024-07-15 20:47:40.184085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:05.798 [2024-07-15 20:47:40.269920] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:05.798 [2024-07-15 20:47:40.269958] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.798 [2024-07-15 20:47:40.269965] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.798 [2024-07-15 20:47:40.269971] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:05.798 [2024-07-15 20:47:40.269977] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:05.798 [2024-07-15 20:47:40.270018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.798 [2024-07-15 20:47:40.270103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:05.798 [2024-07-15 20:47:40.270202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:05.798 [2024-07-15 20:47:40.270203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.732 20:47:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:06.732 20:47:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:22:06.732 20:47:40 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:06.732 20:47:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.732 20:47:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:06.732 [2024-07-15 20:47:40.922982] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:06.732 20:47:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.732 20:47:40 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:06.732 20:47:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:06.732 20:47:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:06.732 20:47:40 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:06.732 20:47:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.732 20:47:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:06.732 Malloc0 00:22:06.732 20:47:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.732 20:47:40 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:06.732 20:47:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.732 20:47:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:06.732 20:47:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.732 20:47:40 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid 
ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:06.732 20:47:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.732 20:47:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:06.732 20:47:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.732 20:47:41 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:06.732 20:47:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.732 20:47:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:06.732 [2024-07-15 20:47:41.011037] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:06.732 20:47:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.732 20:47:41 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:06.732 20:47:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.732 20:47:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:06.732 20:47:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.732 20:47:41 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:06.732 20:47:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.732 20:47:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:06.732 [ 00:22:06.732 { 00:22:06.732 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:06.732 "subtype": "Discovery", 00:22:06.732 "listen_addresses": [ 00:22:06.732 { 00:22:06.732 "trtype": "TCP", 00:22:06.732 "adrfam": "IPv4", 00:22:06.732 "traddr": "10.0.0.2", 00:22:06.732 "trsvcid": "4420" 00:22:06.732 } 00:22:06.732 ], 00:22:06.732 "allow_any_host": true, 00:22:06.732 "hosts": [] 00:22:06.732 }, 00:22:06.732 { 00:22:06.732 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:06.732 "subtype": "NVMe", 00:22:06.732 "listen_addresses": [ 00:22:06.732 { 00:22:06.732 "trtype": "TCP", 00:22:06.732 "adrfam": "IPv4", 00:22:06.732 "traddr": "10.0.0.2", 00:22:06.732 "trsvcid": "4420" 00:22:06.732 } 00:22:06.732 ], 00:22:06.732 "allow_any_host": true, 00:22:06.732 "hosts": [], 00:22:06.732 "serial_number": "SPDK00000000000001", 00:22:06.732 "model_number": "SPDK bdev Controller", 00:22:06.732 "max_namespaces": 32, 00:22:06.732 "min_cntlid": 1, 00:22:06.732 "max_cntlid": 65519, 00:22:06.732 "namespaces": [ 00:22:06.732 { 00:22:06.732 "nsid": 1, 00:22:06.732 "bdev_name": "Malloc0", 00:22:06.732 "name": "Malloc0", 00:22:06.732 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:06.732 "eui64": "ABCDEF0123456789", 00:22:06.732 "uuid": "9ba3b1c6-c515-44cf-bb70-b339824fa5d3" 00:22:06.732 } 00:22:06.732 ] 00:22:06.732 } 00:22:06.732 ] 00:22:06.732 20:47:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.733 20:47:41 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:06.733 [2024-07-15 20:47:41.062574] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
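Before the identify run that starts here, the target was configured through a short rpc_cmd sequence, and nvmf_get_subsystems returned the JSON dump above. rpc_cmd is the autotest wrapper around scripts/rpc.py, so an equivalent stand-alone sequence is roughly the hedged sketch below (all values taken from this run):

  # Hedged sketch: same target configuration via scripts/rpc.py directly,
  # assuming the nvmf_tgt started above is listening on /var/tmp/spdk.sock.
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
         --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_get_subsystems          # returns the JSON shown above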
00:22:06.733 [2024-07-15 20:47:41.062607] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2768020 ] 00:22:06.733 EAL: No free 2048 kB hugepages reported on node 1 00:22:06.733 [2024-07-15 20:47:41.091768] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:06.733 [2024-07-15 20:47:41.091814] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:06.733 [2024-07-15 20:47:41.091819] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:06.733 [2024-07-15 20:47:41.091829] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:06.733 [2024-07-15 20:47:41.091835] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:06.733 [2024-07-15 20:47:41.092135] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:06.733 [2024-07-15 20:47:41.092163] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x49aec0 0 00:22:06.733 [2024-07-15 20:47:41.106234] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:06.733 [2024-07-15 20:47:41.106248] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:06.733 [2024-07-15 20:47:41.106252] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:06.733 [2024-07-15 20:47:41.106255] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:06.733 [2024-07-15 20:47:41.106291] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.733 [2024-07-15 20:47:41.106296] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.733 [2024-07-15 20:47:41.106300] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x49aec0) 00:22:06.733 [2024-07-15 20:47:41.106311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:06.733 [2024-07-15 20:47:41.106328] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51de40, cid 0, qid 0 00:22:06.733 [2024-07-15 20:47:41.114232] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.733 [2024-07-15 20:47:41.114240] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.733 [2024-07-15 20:47:41.114244] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.733 [2024-07-15 20:47:41.114248] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51de40) on tqpair=0x49aec0 00:22:06.733 [2024-07-15 20:47:41.114257] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:06.733 [2024-07-15 20:47:41.114263] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:06.733 [2024-07-15 20:47:41.114268] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:22:06.733 [2024-07-15 20:47:41.114280] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.733 [2024-07-15 20:47:41.114284] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.733 [2024-07-15 20:47:41.114287] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x49aec0) 00:22:06.733 [2024-07-15 20:47:41.114294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.733 [2024-07-15 20:47:41.114307] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51de40, cid 0, qid 0 00:22:06.733 [2024-07-15 20:47:41.114505] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.733 [2024-07-15 20:47:41.114511] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.733 [2024-07-15 20:47:41.114514] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.733 [2024-07-15 20:47:41.114518] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51de40) on tqpair=0x49aec0 00:22:06.733 [2024-07-15 20:47:41.114522] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:06.733 [2024-07-15 20:47:41.114533] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:06.733 [2024-07-15 20:47:41.114540] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.733 [2024-07-15 20:47:41.114543] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.733 [2024-07-15 20:47:41.114546] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x49aec0) 00:22:06.733 [2024-07-15 20:47:41.114552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.733 [2024-07-15 20:47:41.114563] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51de40, cid 0, qid 0 00:22:06.733 [2024-07-15 20:47:41.114687] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.733 [2024-07-15 20:47:41.114693] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.733 [2024-07-15 20:47:41.114696] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.733 [2024-07-15 20:47:41.114699] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51de40) on tqpair=0x49aec0 00:22:06.733 [2024-07-15 20:47:41.114704] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:06.733 [2024-07-15 20:47:41.114711] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:06.733 [2024-07-15 20:47:41.114717] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.733 [2024-07-15 20:47:41.114720] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.733 [2024-07-15 20:47:41.114723] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x49aec0) 00:22:06.733 [2024-07-15 20:47:41.114729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.733 [2024-07-15 20:47:41.114738] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51de40, cid 0, qid 0 00:22:06.733 [2024-07-15 20:47:41.114836] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.733 
[2024-07-15 20:47:41.114842] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.733 [2024-07-15 20:47:41.114845] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.733 [2024-07-15 20:47:41.114848] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51de40) on tqpair=0x49aec0 00:22:06.733 [2024-07-15 20:47:41.114853] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:06.733 [2024-07-15 20:47:41.114860] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.733 [2024-07-15 20:47:41.114864] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.733 [2024-07-15 20:47:41.114867] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x49aec0) 00:22:06.733 [2024-07-15 20:47:41.114873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.733 [2024-07-15 20:47:41.114882] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51de40, cid 0, qid 0 00:22:06.733 [2024-07-15 20:47:41.114958] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.733 [2024-07-15 20:47:41.114964] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.733 [2024-07-15 20:47:41.114966] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.733 [2024-07-15 20:47:41.114970] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51de40) on tqpair=0x49aec0 00:22:06.733 [2024-07-15 20:47:41.114974] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:06.733 [2024-07-15 20:47:41.114978] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:06.733 [2024-07-15 20:47:41.114985] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:06.733 [2024-07-15 20:47:41.115092] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:06.733 [2024-07-15 20:47:41.115096] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:06.733 [2024-07-15 20:47:41.115103] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.733 [2024-07-15 20:47:41.115107] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.733 [2024-07-15 20:47:41.115110] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x49aec0) 00:22:06.733 [2024-07-15 20:47:41.115116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.733 [2024-07-15 20:47:41.115125] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51de40, cid 0, qid 0 00:22:06.733 [2024-07-15 20:47:41.115250] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.733 [2024-07-15 20:47:41.115256] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.733 [2024-07-15 20:47:41.115259] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:22:06.733 [2024-07-15 20:47:41.115262] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51de40) on tqpair=0x49aec0 00:22:06.733 [2024-07-15 20:47:41.115266] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:06.733 [2024-07-15 20:47:41.115274] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.733 [2024-07-15 20:47:41.115277] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.733 [2024-07-15 20:47:41.115281] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x49aec0) 00:22:06.733 [2024-07-15 20:47:41.115286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.733 [2024-07-15 20:47:41.115296] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51de40, cid 0, qid 0 00:22:06.733 [2024-07-15 20:47:41.115391] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.733 [2024-07-15 20:47:41.115396] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.733 [2024-07-15 20:47:41.115399] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.733 [2024-07-15 20:47:41.115402] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51de40) on tqpair=0x49aec0 00:22:06.733 [2024-07-15 20:47:41.115407] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:06.733 [2024-07-15 20:47:41.115410] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:06.733 [2024-07-15 20:47:41.115417] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:06.733 [2024-07-15 20:47:41.115424] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:06.733 [2024-07-15 20:47:41.115432] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.733 [2024-07-15 20:47:41.115436] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x49aec0) 00:22:06.733 [2024-07-15 20:47:41.115442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.733 [2024-07-15 20:47:41.115451] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51de40, cid 0, qid 0 00:22:06.733 [2024-07-15 20:47:41.115555] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:06.733 [2024-07-15 20:47:41.115561] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:06.733 [2024-07-15 20:47:41.115564] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:06.733 [2024-07-15 20:47:41.115569] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x49aec0): datao=0, datal=4096, cccid=0 00:22:06.734 [2024-07-15 20:47:41.115573] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x51de40) on tqpair(0x49aec0): expected_datao=0, payload_size=4096 00:22:06.734 [2024-07-15 20:47:41.115577] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
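The *DEBUG* lines above walk the NVMe-over-Fabrics controller initialization state machine: after the FABRIC CONNECT, the host reads VS and CAP, observes CC.EN = 0 and CSTS.RDY = 0 (controller disabled), writes CC.EN = 1, waits for CSTS.RDY = 1, and only then issues IDENTIFY. The kernel initiator drives the same connect-and-enable path, so as a hedged aside the discovery subsystem configured above could also be queried with nvme-cli:

  # Hypothetical equivalent of the spdk_nvme_identify run, using nvme-cli;
  # assumes the nvme-tcp module is loaded (it was modprobed earlier in this log).
  nvme discover -t tcp -a 10.0.0.2 -s 4420   # connect, enable controller, read discovery log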
00:22:06.734 [2024-07-15 20:47:41.115623] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:06.734 [2024-07-15 20:47:41.115627] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:06.734 [2024-07-15 20:47:41.157373] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.734 [2024-07-15 20:47:41.157385] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.734 [2024-07-15 20:47:41.157388] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.734 [2024-07-15 20:47:41.157392] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51de40) on tqpair=0x49aec0 00:22:06.734 [2024-07-15 20:47:41.157399] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:06.734 [2024-07-15 20:47:41.157407] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:06.734 [2024-07-15 20:47:41.157411] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:06.734 [2024-07-15 20:47:41.157416] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:06.734 [2024-07-15 20:47:41.157420] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:22:06.734 [2024-07-15 20:47:41.157424] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:06.734 [2024-07-15 20:47:41.157433] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:06.734 [2024-07-15 20:47:41.157440] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.734 [2024-07-15 20:47:41.157443] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.734 [2024-07-15 20:47:41.157446] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x49aec0) 00:22:06.734 [2024-07-15 20:47:41.157454] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:06.734 [2024-07-15 20:47:41.157465] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51de40, cid 0, qid 0 00:22:06.734 [2024-07-15 20:47:41.157550] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.734 [2024-07-15 20:47:41.157556] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.734 [2024-07-15 20:47:41.157559] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.734 [2024-07-15 20:47:41.157562] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51de40) on tqpair=0x49aec0 00:22:06.734 [2024-07-15 20:47:41.157569] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.734 [2024-07-15 20:47:41.157572] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.734 [2024-07-15 20:47:41.157575] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x49aec0) 00:22:06.734 [2024-07-15 20:47:41.157580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.734 [2024-07-15 20:47:41.157585] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.734 [2024-07-15 20:47:41.157589] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.734 [2024-07-15 20:47:41.157592] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x49aec0) 00:22:06.734 [2024-07-15 20:47:41.157597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.734 [2024-07-15 20:47:41.157602] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.734 [2024-07-15 20:47:41.157607] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.734 [2024-07-15 20:47:41.157610] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x49aec0) 00:22:06.734 [2024-07-15 20:47:41.157615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.734 [2024-07-15 20:47:41.157620] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.734 [2024-07-15 20:47:41.157623] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.734 [2024-07-15 20:47:41.157626] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x49aec0) 00:22:06.734 [2024-07-15 20:47:41.157631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.734 [2024-07-15 20:47:41.157635] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:06.734 [2024-07-15 20:47:41.157645] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:06.734 [2024-07-15 20:47:41.157652] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.734 [2024-07-15 20:47:41.157655] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x49aec0) 00:22:06.734 [2024-07-15 20:47:41.157661] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.734 [2024-07-15 20:47:41.157671] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51de40, cid 0, qid 0 00:22:06.734 [2024-07-15 20:47:41.157676] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51dfc0, cid 1, qid 0 00:22:06.734 [2024-07-15 20:47:41.157680] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e140, cid 2, qid 0 00:22:06.734 [2024-07-15 20:47:41.157684] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e2c0, cid 3, qid 0 00:22:06.734 [2024-07-15 20:47:41.157688] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e440, cid 4, qid 0 00:22:06.734 [2024-07-15 20:47:41.157799] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.734 [2024-07-15 20:47:41.157805] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.734 [2024-07-15 20:47:41.157808] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.734 [2024-07-15 20:47:41.157811] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e440) on tqpair=0x49aec0 00:22:06.734 [2024-07-15 20:47:41.157815] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:06.734 [2024-07-15 20:47:41.157820] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:06.734 [2024-07-15 20:47:41.157829] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.734 [2024-07-15 20:47:41.157833] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x49aec0) 00:22:06.734 [2024-07-15 20:47:41.157838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.734 [2024-07-15 20:47:41.157847] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e440, cid 4, qid 0 00:22:06.734 [2024-07-15 20:47:41.157953] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:06.734 [2024-07-15 20:47:41.157959] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:06.734 [2024-07-15 20:47:41.157962] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:06.734 [2024-07-15 20:47:41.157965] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x49aec0): datao=0, datal=4096, cccid=4 00:22:06.734 [2024-07-15 20:47:41.157969] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x51e440) on tqpair(0x49aec0): expected_datao=0, payload_size=4096 00:22:06.734 [2024-07-15 20:47:41.157974] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.734 [2024-07-15 20:47:41.157980] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:06.734 [2024-07-15 20:47:41.157984] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:06.734 [2024-07-15 20:47:41.158008] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.734 [2024-07-15 20:47:41.158013] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.734 [2024-07-15 20:47:41.158016] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.734 [2024-07-15 20:47:41.158019] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e440) on tqpair=0x49aec0 00:22:06.734 [2024-07-15 20:47:41.158030] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:06.734 [2024-07-15 20:47:41.158050] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.734 [2024-07-15 20:47:41.158054] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x49aec0) 00:22:06.734 [2024-07-15 20:47:41.158060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.734 [2024-07-15 20:47:41.158066] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.734 [2024-07-15 20:47:41.158069] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.734 [2024-07-15 20:47:41.158072] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x49aec0) 00:22:06.734 [2024-07-15 20:47:41.158077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.734 [2024-07-15 20:47:41.158090] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x51e440, cid 4, qid 0 00:22:06.734 [2024-07-15 20:47:41.158095] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e5c0, cid 5, qid 0 00:22:06.734 [2024-07-15 20:47:41.162232] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:06.734 [2024-07-15 20:47:41.162240] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:06.734 [2024-07-15 20:47:41.162243] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:06.734 [2024-07-15 20:47:41.162247] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x49aec0): datao=0, datal=1024, cccid=4 00:22:06.734 [2024-07-15 20:47:41.162251] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x51e440) on tqpair(0x49aec0): expected_datao=0, payload_size=1024 00:22:06.734 [2024-07-15 20:47:41.162254] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.734 [2024-07-15 20:47:41.162260] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:06.734 [2024-07-15 20:47:41.162263] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:06.734 [2024-07-15 20:47:41.162268] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.734 [2024-07-15 20:47:41.162273] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.734 [2024-07-15 20:47:41.162276] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.734 [2024-07-15 20:47:41.162279] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e5c0) on tqpair=0x49aec0 00:22:06.734 [2024-07-15 20:47:41.201230] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.734 [2024-07-15 20:47:41.201239] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.734 [2024-07-15 20:47:41.201242] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.734 [2024-07-15 20:47:41.201245] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e440) on tqpair=0x49aec0 00:22:06.734 [2024-07-15 20:47:41.201260] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.734 [2024-07-15 20:47:41.201263] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x49aec0) 00:22:06.734 [2024-07-15 20:47:41.201270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.734 [2024-07-15 20:47:41.201286] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e440, cid 4, qid 0 00:22:06.734 [2024-07-15 20:47:41.201506] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:06.734 [2024-07-15 20:47:41.201512] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:06.734 [2024-07-15 20:47:41.201515] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:06.734 [2024-07-15 20:47:41.201518] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x49aec0): datao=0, datal=3072, cccid=4 00:22:06.734 [2024-07-15 20:47:41.201522] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x51e440) on tqpair(0x49aec0): expected_datao=0, payload_size=3072 00:22:06.734 [2024-07-15 20:47:41.201525] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.734 [2024-07-15 20:47:41.201561] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:06.735 [2024-07-15 20:47:41.201564] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:06.997 [2024-07-15 20:47:41.243353] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:06.997 [2024-07-15 20:47:41.243364] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:06.997 [2024-07-15 20:47:41.243367] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:06.997 [2024-07-15 20:47:41.243371] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e440) on tqpair=0x49aec0
00:22:06.997 [2024-07-15 20:47:41.243380] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:06.997 [2024-07-15 20:47:41.243384] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x49aec0)
00:22:06.997 [2024-07-15 20:47:41.243390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.997 [2024-07-15 20:47:41.243406] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e440, cid 4, qid 0
00:22:06.997 [2024-07-15 20:47:41.243496] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:06.997 [2024-07-15 20:47:41.243502] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:06.997 [2024-07-15 20:47:41.243505] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:06.997 [2024-07-15 20:47:41.243508] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x49aec0): datao=0, datal=8, cccid=4
00:22:06.997 [2024-07-15 20:47:41.243512] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x51e440) on tqpair(0x49aec0): expected_datao=0, payload_size=8
00:22:06.997 [2024-07-15 20:47:41.243516] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:06.997 [2024-07-15 20:47:41.243522] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:06.997 [2024-07-15 20:47:41.243525] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:06.997 [2024-07-15 20:47:41.289236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:06.997 [2024-07-15 20:47:41.289246] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:06.997 [2024-07-15 20:47:41.289249] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:06.997 [2024-07-15 20:47:41.289253] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e440) on tqpair=0x49aec0
00:22:06.997 =====================================================
00:22:06.997 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:22:06.997 =====================================================
00:22:06.997 Controller Capabilities/Features
00:22:06.997 ================================
00:22:06.997 Vendor ID: 0000
00:22:06.997 Subsystem Vendor ID: 0000
00:22:06.997 Serial Number: ....................
00:22:06.997 Model Number: ........................................
00:22:06.997 Firmware Version: 24.09
00:22:06.997 Recommended Arb Burst: 0
00:22:06.997 IEEE OUI Identifier: 00 00 00
00:22:06.997 Multi-path I/O
00:22:06.997 May have multiple subsystem ports: No
00:22:06.997 May have multiple controllers: No
00:22:06.997 Associated with SR-IOV VF: No
00:22:06.997 Max Data Transfer Size: 131072
00:22:06.997 Max Number of Namespaces: 0
00:22:06.997 Max Number of I/O Queues: 1024
00:22:06.997 NVMe Specification Version (VS): 1.3
00:22:06.997 NVMe Specification Version (Identify): 1.3
00:22:06.997 Maximum Queue Entries: 128
00:22:06.997 Contiguous Queues Required: Yes
00:22:06.997 Arbitration Mechanisms Supported
00:22:06.997 Weighted Round Robin: Not Supported
00:22:06.997 Vendor Specific: Not Supported
00:22:06.997 Reset Timeout: 15000 ms
00:22:06.997 Doorbell Stride: 4 bytes
00:22:06.997 NVM Subsystem Reset: Not Supported
00:22:06.997 Command Sets Supported
00:22:06.997 NVM Command Set: Supported
00:22:06.997 Boot Partition: Not Supported
00:22:06.997 Memory Page Size Minimum: 4096 bytes
00:22:06.997 Memory Page Size Maximum: 4096 bytes
00:22:06.997 Persistent Memory Region: Not Supported
00:22:06.997 Optional Asynchronous Events Supported
00:22:06.997 Namespace Attribute Notices: Not Supported
00:22:06.997 Firmware Activation Notices: Not Supported
00:22:06.997 ANA Change Notices: Not Supported
00:22:06.997 PLE Aggregate Log Change Notices: Not Supported
00:22:06.997 LBA Status Info Alert Notices: Not Supported
00:22:06.997 EGE Aggregate Log Change Notices: Not Supported
00:22:06.997 Normal NVM Subsystem Shutdown event: Not Supported
00:22:06.997 Zone Descriptor Change Notices: Not Supported
00:22:06.997 Discovery Log Change Notices: Supported
00:22:06.997 Controller Attributes
00:22:06.997 128-bit Host Identifier: Not Supported
00:22:06.997 Non-Operational Permissive Mode: Not Supported
00:22:06.997 NVM Sets: Not Supported
00:22:06.997 Read Recovery Levels: Not Supported
00:22:06.997 Endurance Groups: Not Supported
00:22:06.997 Predictable Latency Mode: Not Supported
00:22:06.997 Traffic Based Keep Alive: Not Supported
00:22:06.997 Namespace Granularity: Not Supported
00:22:06.997 SQ Associations: Not Supported
00:22:06.997 UUID List: Not Supported
00:22:06.997 Multi-Domain Subsystem: Not Supported
00:22:06.997 Fixed Capacity Management: Not Supported
00:22:06.997 Variable Capacity Management: Not Supported
00:22:06.997 Delete Endurance Group: Not Supported
00:22:06.997 Delete NVM Set: Not Supported
00:22:06.997 Extended LBA Formats Supported: Not Supported
00:22:06.997 Flexible Data Placement Supported: Not Supported
00:22:06.997
00:22:06.997 Controller Memory Buffer Support
00:22:06.997 ================================
00:22:06.997 Supported: No
00:22:06.997
00:22:06.997 Persistent Memory Region Support
00:22:06.997 ================================
00:22:06.997 Supported: No
00:22:06.997
00:22:06.997 Admin Command Set Attributes
00:22:06.997 ============================
00:22:06.997 Security Send/Receive: Not Supported
00:22:06.997 Format NVM: Not Supported
00:22:06.997 Firmware Activate/Download: Not Supported
00:22:06.997 Namespace Management: Not Supported
00:22:06.997 Device Self-Test: Not Supported
00:22:06.997 Directives: Not Supported
00:22:06.997 NVMe-MI: Not Supported
00:22:06.997 Virtualization Management: Not Supported
00:22:06.997 Doorbell Buffer Config: Not Supported
00:22:06.997 Get LBA Status Capability: Not Supported
00:22:06.997 Command & Feature Lockdown Capability: Not Supported
00:22:06.997 Abort Command Limit: 1
00:22:06.997 Async Event Request Limit: 4
00:22:06.997 Number of Firmware Slots: N/A
00:22:06.997 Firmware Slot 1 Read-Only: N/A
00:22:06.997 Firmware Activation Without Reset: N/A
00:22:06.997 Multiple Update Detection Support: N/A
00:22:06.997 Firmware Update Granularity: No Information Provided
00:22:06.997 Per-Namespace SMART Log: No
00:22:06.997 Asymmetric Namespace Access Log Page: Not Supported
00:22:06.997 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:22:06.997 Command Effects Log Page: Not Supported
00:22:06.997 Get Log Page Extended Data: Supported
00:22:06.997 Telemetry Log Pages: Not Supported
00:22:06.997 Persistent Event Log Pages: Not Supported
00:22:06.997 Supported Log Pages Log Page: May Support
00:22:06.997 Commands Supported & Effects Log Page: Not Supported
00:22:06.997 Feature Identifiers & Effects Log Page: May Support
00:22:06.997 NVMe-MI Commands & Effects Log Page: May Support
00:22:06.997 Data Area 4 for Telemetry Log: Not Supported
00:22:06.997 Error Log Page Entries Supported: 128
00:22:06.997 Keep Alive: Not Supported
00:22:06.997
00:22:06.997 NVM Command Set Attributes
00:22:06.997 ==========================
00:22:06.997 Submission Queue Entry Size
00:22:06.997 Max: 1
00:22:06.997 Min: 1
00:22:06.997 Completion Queue Entry Size
00:22:06.997 Max: 1
00:22:06.997 Min: 1
00:22:06.997 Number of Namespaces: 0
00:22:06.997 Compare Command: Not Supported
00:22:06.997 Write Uncorrectable Command: Not Supported
00:22:06.997 Dataset Management Command: Not Supported
00:22:06.998 Write Zeroes Command: Not Supported
00:22:06.998 Set Features Save Field: Not Supported
00:22:06.998 Reservations: Not Supported
00:22:06.998 Timestamp: Not Supported
00:22:06.998 Copy: Not Supported
00:22:06.998 Volatile Write Cache: Not Present
00:22:06.998 Atomic Write Unit (Normal): 1
00:22:06.998 Atomic Write Unit (PFail): 1
00:22:06.998 Atomic Compare & Write Unit: 1
00:22:06.998 Fused Compare & Write: Supported
00:22:06.998 Scatter-Gather List
00:22:06.998 SGL Command Set: Supported
00:22:06.998 SGL Keyed: Supported
00:22:06.998 SGL Bit Bucket Descriptor: Not Supported
00:22:06.998 SGL Metadata Pointer: Not Supported
00:22:06.998 Oversized SGL: Not Supported
00:22:06.998 SGL Metadata Address: Not Supported
00:22:06.998 SGL Offset: Supported
00:22:06.998 Transport SGL Data Block: Not Supported
00:22:06.998 Replay Protected Memory Block: Not Supported
00:22:06.998
00:22:06.998 Firmware Slot Information
00:22:06.998 =========================
00:22:06.998 Active slot: 0
00:22:06.998
00:22:06.998
00:22:06.998 Error Log
00:22:06.998 =========
00:22:06.998
00:22:06.998 Active Namespaces
00:22:06.998 =================
00:22:06.998 Discovery Log Page
00:22:06.998 ==================
00:22:06.998 Generation Counter: 2
00:22:06.998 Number of Records: 2
00:22:06.998 Record Format: 0
00:22:06.998
00:22:06.998 Discovery Log Entry 0
00:22:06.998 ----------------------
00:22:06.998 Transport Type: 3 (TCP)
00:22:06.998 Address Family: 1 (IPv4)
00:22:06.998 Subsystem Type: 3 (Current Discovery Subsystem)
00:22:06.998 Entry Flags:
00:22:06.998 Duplicate Returned Information: 1
00:22:06.998 Explicit Persistent Connection Support for Discovery: 1
00:22:06.998 Transport Requirements:
00:22:06.998 Secure Channel: Not Required
00:22:06.998 Port ID: 0 (0x0000)
00:22:06.998 Controller ID: 65535 (0xffff)
00:22:06.998 Admin Max SQ Size: 128
00:22:06.998 Transport Service Identifier: 4420
00:22:06.998 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:22:06.998 Transport Address: 10.0.0.2
00:22:06.998 Discovery Log Entry 1
00:22:06.998 ----------------------
00:22:06.998 Transport Type: 3 (TCP)
00:22:06.998 Address Family: 1 (IPv4)
00:22:06.998 Subsystem Type: 2 (NVM Subsystem)
00:22:06.998 Entry Flags:
00:22:06.998 Duplicate Returned Information: 0
00:22:06.998 Explicit Persistent Connection Support for Discovery: 0
00:22:06.998 Transport Requirements:
00:22:06.998 Secure Channel: Not Required
00:22:06.998 Port ID: 0 (0x0000)
00:22:06.998 Controller ID: 65535 (0xffff)
00:22:06.998 Admin Max SQ Size: 128
00:22:06.998 Transport Service Identifier: 4420
00:22:06.998 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:22:06.998 Transport Address: 10.0.0.2 [2024-07-15 20:47:41.289334] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:22:06.998 [2024-07-15 20:47:41.289345] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51de40) on tqpair=0x49aec0
00:22:06.998 [2024-07-15 20:47:41.289350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.998 [2024-07-15 20:47:41.289355] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51dfc0) on tqpair=0x49aec0
00:22:06.998 [2024-07-15 20:47:41.289359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.998 [2024-07-15 20:47:41.289363] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e140) on tqpair=0x49aec0
00:22:06.998 [2024-07-15 20:47:41.289367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.998 [2024-07-15 20:47:41.289372] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e2c0) on tqpair=0x49aec0
00:22:06.998 [2024-07-15 20:47:41.289377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.998 [2024-07-15 20:47:41.289386] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:06.998 [2024-07-15 20:47:41.289390] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:06.998 [2024-07-15 20:47:41.289393] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x49aec0)
00:22:06.998 [2024-07-15 20:47:41.289400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.998 [2024-07-15 20:47:41.289412] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e2c0, cid 3, qid 0
00:22:06.998 [2024-07-15 20:47:41.289521] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:06.998 [2024-07-15 20:47:41.289528] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:06.998 [2024-07-15 20:47:41.289531] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:06.998 [2024-07-15 20:47:41.289534] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e2c0) on tqpair=0x49aec0
00:22:06.998 [2024-07-15 20:47:41.289540] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:06.998 [2024-07-15 20:47:41.289543] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:06.998 [2024-07-15 20:47:41.289546] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x49aec0)
00:22:06.998 [2024-07-15 20:47:41.289552]
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.998 [2024-07-15 20:47:41.289565] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e2c0, cid 3, qid 0 00:22:06.998 [2024-07-15 20:47:41.289670] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.998 [2024-07-15 20:47:41.289676] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.998 [2024-07-15 20:47:41.289680] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.998 [2024-07-15 20:47:41.289683] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e2c0) on tqpair=0x49aec0 00:22:06.998 [2024-07-15 20:47:41.289687] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:06.998 [2024-07-15 20:47:41.289692] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:06.998 [2024-07-15 20:47:41.289701] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.998 [2024-07-15 20:47:41.289705] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.998 [2024-07-15 20:47:41.289708] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x49aec0) 00:22:06.998 [2024-07-15 20:47:41.289714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.998 [2024-07-15 20:47:41.289723] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e2c0, cid 3, qid 0 00:22:06.998 [2024-07-15 20:47:41.289796] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.998 [2024-07-15 20:47:41.289802] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.998 [2024-07-15 20:47:41.289805] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.998 [2024-07-15 20:47:41.289809] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e2c0) on tqpair=0x49aec0 00:22:06.998 [2024-07-15 20:47:41.289817] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.998 [2024-07-15 20:47:41.289821] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.998 [2024-07-15 20:47:41.289824] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x49aec0) 00:22:06.998 [2024-07-15 20:47:41.289830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.998 [2024-07-15 20:47:41.289839] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e2c0, cid 3, qid 0 00:22:06.998 [2024-07-15 20:47:41.289914] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.998 [2024-07-15 20:47:41.289920] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.998 [2024-07-15 20:47:41.289923] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.998 [2024-07-15 20:47:41.289926] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e2c0) on tqpair=0x49aec0 00:22:06.998 [2024-07-15 20:47:41.289934] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.998 [2024-07-15 20:47:41.289938] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.998 [2024-07-15 20:47:41.289941] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x49aec0) 00:22:06.998 [2024-07-15 20:47:41.289946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.998 [2024-07-15 20:47:41.289955] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e2c0, cid 3, qid 0 00:22:06.998 [2024-07-15 20:47:41.290081] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.998 [2024-07-15 20:47:41.290087] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.998 [2024-07-15 20:47:41.290090] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.998 [2024-07-15 20:47:41.290093] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e2c0) on tqpair=0x49aec0 00:22:06.998 [2024-07-15 20:47:41.290102] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.998 [2024-07-15 20:47:41.290105] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.998 [2024-07-15 20:47:41.290108] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x49aec0) 00:22:06.998 [2024-07-15 20:47:41.290114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.998 [2024-07-15 20:47:41.290124] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e2c0, cid 3, qid 0 00:22:06.998 [2024-07-15 20:47:41.290201] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.998 [2024-07-15 20:47:41.290206] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.998 [2024-07-15 20:47:41.290209] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.998 [2024-07-15 20:47:41.290212] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e2c0) on tqpair=0x49aec0 00:22:06.998 [2024-07-15 20:47:41.290220] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.998 [2024-07-15 20:47:41.290230] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.998 [2024-07-15 20:47:41.290234] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x49aec0) 00:22:06.998 [2024-07-15 20:47:41.290240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.998 [2024-07-15 20:47:41.290249] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e2c0, cid 3, qid 0 00:22:06.998 [2024-07-15 20:47:41.290327] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.998 [2024-07-15 20:47:41.290333] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.998 [2024-07-15 20:47:41.290336] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.998 [2024-07-15 20:47:41.290339] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e2c0) on tqpair=0x49aec0 00:22:06.998 [2024-07-15 20:47:41.290347] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.998 [2024-07-15 20:47:41.290350] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.999 [2024-07-15 20:47:41.290354] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x49aec0) 00:22:06.999 [2024-07-15 20:47:41.290359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.999 [2024-07-15 20:47:41.290368] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e2c0, cid 3, qid 0 00:22:06.999 [2024-07-15 20:47:41.290445] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.999 [2024-07-15 20:47:41.290453] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.999 [2024-07-15 20:47:41.290456] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.999 [2024-07-15 20:47:41.290459] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e2c0) on tqpair=0x49aec0 00:22:06.999 [2024-07-15 20:47:41.290467] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.999 [2024-07-15 20:47:41.290470] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.999 [2024-07-15 20:47:41.290473] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x49aec0) 00:22:06.999 [2024-07-15 20:47:41.290479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.999 [2024-07-15 20:47:41.290488] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e2c0, cid 3, qid 0 00:22:06.999 [2024-07-15 20:47:41.290564] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.999 [2024-07-15 20:47:41.290570] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.999 [2024-07-15 20:47:41.290572] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.999 [2024-07-15 20:47:41.290576] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e2c0) on tqpair=0x49aec0 00:22:06.999 [2024-07-15 20:47:41.290584] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.999 [2024-07-15 20:47:41.290587] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.999 [2024-07-15 20:47:41.290590] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x49aec0) 00:22:06.999 [2024-07-15 20:47:41.290596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.999 [2024-07-15 20:47:41.290604] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e2c0, cid 3, qid 0 00:22:06.999 [2024-07-15 20:47:41.290680] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.999 [2024-07-15 20:47:41.290685] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.999 [2024-07-15 20:47:41.290688] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.999 [2024-07-15 20:47:41.290691] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e2c0) on tqpair=0x49aec0 00:22:06.999 [2024-07-15 20:47:41.290699] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.999 [2024-07-15 20:47:41.290703] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.999 [2024-07-15 20:47:41.290706] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x49aec0) 00:22:06.999 [2024-07-15 20:47:41.290711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.999 [2024-07-15 20:47:41.290720] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e2c0, cid 3, qid 0 00:22:06.999 [2024-07-15 20:47:41.290794] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.999 [2024-07-15 20:47:41.290799] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.999 [2024-07-15 20:47:41.290802] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.999 [2024-07-15 20:47:41.290805] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e2c0) on tqpair=0x49aec0 00:22:06.999 [2024-07-15 20:47:41.290813] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.999 [2024-07-15 20:47:41.290817] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.999 [2024-07-15 20:47:41.290820] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x49aec0) 00:22:06.999 [2024-07-15 20:47:41.290825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.999 [2024-07-15 20:47:41.290834] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e2c0, cid 3, qid 0 00:22:06.999 [2024-07-15 20:47:41.290914] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.999 [2024-07-15 20:47:41.290920] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.999 [2024-07-15 20:47:41.290924] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.999 [2024-07-15 20:47:41.290927] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e2c0) on tqpair=0x49aec0 00:22:06.999 [2024-07-15 20:47:41.290935] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.999 [2024-07-15 20:47:41.290939] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.999 [2024-07-15 20:47:41.290942] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x49aec0) 00:22:06.999 [2024-07-15 20:47:41.290948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.999 [2024-07-15 20:47:41.290956] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e2c0, cid 3, qid 0 00:22:06.999 [2024-07-15 20:47:41.291031] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.999 [2024-07-15 20:47:41.291037] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.999 [2024-07-15 20:47:41.291040] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.999 [2024-07-15 20:47:41.291043] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e2c0) on tqpair=0x49aec0 00:22:06.999 [2024-07-15 20:47:41.291051] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.999 [2024-07-15 20:47:41.291055] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.999 [2024-07-15 20:47:41.291058] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x49aec0) 00:22:06.999 [2024-07-15 20:47:41.291063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.999 [2024-07-15 20:47:41.291072] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e2c0, cid 3, qid 0 00:22:06.999 [2024-07-15 20:47:41.291152] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.999 [2024-07-15 20:47:41.291158] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.999 [2024-07-15 20:47:41.291161] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.999 [2024-07-15 20:47:41.291164] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e2c0) on tqpair=0x49aec0 00:22:06.999 [2024-07-15 20:47:41.291172] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.999 [2024-07-15 20:47:41.291175] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.999 [2024-07-15 20:47:41.291178] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x49aec0) 00:22:06.999 [2024-07-15 20:47:41.291184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.999 [2024-07-15 20:47:41.291192] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e2c0, cid 3, qid 0 00:22:06.999 [2024-07-15 20:47:41.291271] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.999 [2024-07-15 20:47:41.291277] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.999 [2024-07-15 20:47:41.291279] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.999 [2024-07-15 20:47:41.291283] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e2c0) on tqpair=0x49aec0 00:22:06.999 [2024-07-15 20:47:41.291291] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.999 [2024-07-15 20:47:41.291294] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.999 [2024-07-15 20:47:41.291297] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x49aec0) 00:22:06.999 [2024-07-15 20:47:41.291303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.999 [2024-07-15 20:47:41.291312] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e2c0, cid 3, qid 0 00:22:06.999 [2024-07-15 20:47:41.291385] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.999 [2024-07-15 20:47:41.291391] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.999 [2024-07-15 20:47:41.291394] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.999 [2024-07-15 20:47:41.291401] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e2c0) on tqpair=0x49aec0 00:22:06.999 [2024-07-15 20:47:41.291409] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.999 [2024-07-15 20:47:41.291413] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.999 [2024-07-15 20:47:41.291416] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x49aec0) 00:22:06.999 [2024-07-15 20:47:41.291421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.999 [2024-07-15 20:47:41.291430] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e2c0, cid 3, qid 0 00:22:06.999 [2024-07-15 20:47:41.291508] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.999 [2024-07-15 20:47:41.291514] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.999 [2024-07-15 20:47:41.291517] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.999 [2024-07-15 20:47:41.291520] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e2c0) on tqpair=0x49aec0 00:22:06.999 
[2024-07-15 20:47:41.291528] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.999 [2024-07-15 20:47:41.291531] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.999 [2024-07-15 20:47:41.291534] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x49aec0) 00:22:06.999 [2024-07-15 20:47:41.291540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.999 [2024-07-15 20:47:41.291549] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e2c0, cid 3, qid 0 00:22:06.999 [2024-07-15 20:47:41.291624] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.999 [2024-07-15 20:47:41.291630] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.999 [2024-07-15 20:47:41.291632] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.999 [2024-07-15 20:47:41.291636] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e2c0) on tqpair=0x49aec0 00:22:06.999 [2024-07-15 20:47:41.291643] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.999 [2024-07-15 20:47:41.291647] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.999 [2024-07-15 20:47:41.291650] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x49aec0) 00:22:06.999 [2024-07-15 20:47:41.291656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.999 [2024-07-15 20:47:41.291664] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e2c0, cid 3, qid 0 00:22:06.999 [2024-07-15 20:47:41.291735] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.999 [2024-07-15 20:47:41.291741] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.999 [2024-07-15 20:47:41.291744] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.999 [2024-07-15 20:47:41.291747] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e2c0) on tqpair=0x49aec0 00:22:06.999 [2024-07-15 20:47:41.291755] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.999 [2024-07-15 20:47:41.291759] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.999 [2024-07-15 20:47:41.291762] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x49aec0) 00:22:06.999 [2024-07-15 20:47:41.291767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.999 [2024-07-15 20:47:41.291776] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e2c0, cid 3, qid 0 00:22:06.999 [2024-07-15 20:47:41.291852] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.999 [2024-07-15 20:47:41.291858] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.999 [2024-07-15 20:47:41.291861] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.999 [2024-07-15 20:47:41.291864] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e2c0) on tqpair=0x49aec0 00:22:07.000 [2024-07-15 20:47:41.291874] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.000 [2024-07-15 20:47:41.291878] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.000 [2024-07-15 
20:47:41.291881] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x49aec0) 00:22:07.000 [2024-07-15 20:47:41.291886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.000 [2024-07-15 20:47:41.291895] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e2c0, cid 3, qid 0 00:22:07.000 [2024-07-15 20:47:41.291973] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.000 [2024-07-15 20:47:41.291979] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.000 [2024-07-15 20:47:41.291982] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.000 [2024-07-15 20:47:41.291985] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e2c0) on tqpair=0x49aec0 00:22:07.000 [2024-07-15 20:47:41.291993] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.000 [2024-07-15 20:47:41.291997] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.000 [2024-07-15 20:47:41.292000] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x49aec0) 00:22:07.000 [2024-07-15 20:47:41.292005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.000 [2024-07-15 20:47:41.292014] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e2c0, cid 3, qid 0 00:22:07.000 [2024-07-15 20:47:41.292088] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.000 [2024-07-15 20:47:41.292095] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.000 [2024-07-15 20:47:41.292098] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.000 [2024-07-15 20:47:41.292101] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e2c0) on tqpair=0x49aec0 00:22:07.000 [2024-07-15 20:47:41.292109] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.000 [2024-07-15 20:47:41.292112] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.000 [2024-07-15 20:47:41.292115] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x49aec0) 00:22:07.000 [2024-07-15 20:47:41.292122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.000 [2024-07-15 20:47:41.292131] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e2c0, cid 3, qid 0 00:22:07.000 [2024-07-15 20:47:41.292206] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.000 [2024-07-15 20:47:41.292213] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.000 [2024-07-15 20:47:41.292216] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.000 [2024-07-15 20:47:41.292220] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e2c0) on tqpair=0x49aec0 00:22:07.000 [2024-07-15 20:47:41.292233] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.000 [2024-07-15 20:47:41.292237] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.000 [2024-07-15 20:47:41.292240] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x49aec0) 00:22:07.000 [2024-07-15 20:47:41.292246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.000 [2024-07-15 20:47:41.292255] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e2c0, cid 3, qid 0 00:22:07.000 [2024-07-15 20:47:41.292331] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.000 [2024-07-15 20:47:41.292338] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.000 [2024-07-15 20:47:41.292341] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.000 [2024-07-15 20:47:41.292345] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e2c0) on tqpair=0x49aec0 00:22:07.000 [2024-07-15 20:47:41.292353] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.000 [2024-07-15 20:47:41.292356] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.000 [2024-07-15 20:47:41.292361] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x49aec0) 00:22:07.000 [2024-07-15 20:47:41.292368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.000 [2024-07-15 20:47:41.292378] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e2c0, cid 3, qid 0 00:22:07.000 [2024-07-15 20:47:41.292458] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.000 [2024-07-15 20:47:41.292464] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.000 [2024-07-15 20:47:41.292467] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.000 [2024-07-15 20:47:41.292471] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e2c0) on tqpair=0x49aec0 00:22:07.000 [2024-07-15 20:47:41.292480] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.000 [2024-07-15 20:47:41.292484] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.000 [2024-07-15 20:47:41.292488] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x49aec0) 00:22:07.000 [2024-07-15 20:47:41.292493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.000 [2024-07-15 20:47:41.292502] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e2c0, cid 3, qid 0 00:22:07.000 [2024-07-15 20:47:41.292575] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.000 [2024-07-15 20:47:41.292581] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.000 [2024-07-15 20:47:41.292584] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.000 [2024-07-15 20:47:41.292587] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e2c0) on tqpair=0x49aec0 00:22:07.000 [2024-07-15 20:47:41.292595] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.000 [2024-07-15 20:47:41.292600] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.000 [2024-07-15 20:47:41.292603] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x49aec0) 00:22:07.000 [2024-07-15 20:47:41.292609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.000 [2024-07-15 20:47:41.292618] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e2c0, cid 3, qid 0 00:22:07.000 [2024-07-15 
20:47:41.292692] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.000 [2024-07-15 20:47:41.292698] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.000 [2024-07-15 20:47:41.292700] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.000 [2024-07-15 20:47:41.292704] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e2c0) on tqpair=0x49aec0 00:22:07.000 [2024-07-15 20:47:41.292712] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.000 [2024-07-15 20:47:41.292715] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.000 [2024-07-15 20:47:41.292718] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x49aec0) 00:22:07.000 [2024-07-15 20:47:41.292724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.000 [2024-07-15 20:47:41.292733] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e2c0, cid 3, qid 0 00:22:07.000 [2024-07-15 20:47:41.292807] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.000 [2024-07-15 20:47:41.292813] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.000 [2024-07-15 20:47:41.292816] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.000 [2024-07-15 20:47:41.292819] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e2c0) on tqpair=0x49aec0 00:22:07.000 [2024-07-15 20:47:41.292827] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.000 [2024-07-15 20:47:41.292831] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.000 [2024-07-15 20:47:41.292834] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x49aec0) 00:22:07.000 [2024-07-15 20:47:41.292841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.000 [2024-07-15 20:47:41.292850] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e2c0, cid 3, qid 0 00:22:07.000 [2024-07-15 20:47:41.292924] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.000 [2024-07-15 20:47:41.292930] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.000 [2024-07-15 20:47:41.292932] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.000 [2024-07-15 20:47:41.292936] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e2c0) on tqpair=0x49aec0 00:22:07.000 [2024-07-15 20:47:41.292944] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.000 [2024-07-15 20:47:41.292948] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.000 [2024-07-15 20:47:41.292951] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x49aec0) 00:22:07.000 [2024-07-15 20:47:41.292956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.000 [2024-07-15 20:47:41.292965] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e2c0, cid 3, qid 0 00:22:07.000 [2024-07-15 20:47:41.293041] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.000 [2024-07-15 20:47:41.293047] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.000 [2024-07-15 
20:47:41.293050] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.000 [2024-07-15 20:47:41.293053] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e2c0) on tqpair=0x49aec0 00:22:07.000 [2024-07-15 20:47:41.293061] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.000 [2024-07-15 20:47:41.293065] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.000 [2024-07-15 20:47:41.293068] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x49aec0) 00:22:07.000 [2024-07-15 20:47:41.293073] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.000 [2024-07-15 20:47:41.293082] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e2c0, cid 3, qid 0 00:22:07.000 [2024-07-15 20:47:41.293158] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.000 [2024-07-15 20:47:41.293164] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.000 [2024-07-15 20:47:41.293167] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.000 [2024-07-15 20:47:41.293170] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e2c0) on tqpair=0x49aec0 00:22:07.000 [2024-07-15 20:47:41.293178] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.000 [2024-07-15 20:47:41.293182] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.000 [2024-07-15 20:47:41.293185] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x49aec0) 00:22:07.000 [2024-07-15 20:47:41.293191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.000 [2024-07-15 20:47:41.293199] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e2c0, cid 3, qid 0 00:22:07.000 [2024-07-15 20:47:41.297232] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.000 [2024-07-15 20:47:41.297241] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.000 [2024-07-15 20:47:41.297244] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.000 [2024-07-15 20:47:41.297247] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e2c0) on tqpair=0x49aec0 00:22:07.000 [2024-07-15 20:47:41.297258] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.000 [2024-07-15 20:47:41.297262] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.000 [2024-07-15 20:47:41.297265] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x49aec0) 00:22:07.000 [2024-07-15 20:47:41.297271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.000 [2024-07-15 20:47:41.297286] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e2c0, cid 3, qid 0 00:22:07.000 [2024-07-15 20:47:41.297429] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.000 [2024-07-15 20:47:41.297435] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.000 [2024-07-15 20:47:41.297438] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.001 [2024-07-15 20:47:41.297441] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x51e2c0) on tqpair=0x49aec0 
00:22:07.001 [2024-07-15 20:47:41.297448] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:22:07.001 00:22:07.001 20:47:41 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:07.001 [2024-07-15 20:47:41.334904] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:22:07.001 [2024-07-15 20:47:41.334959] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2768024 ] 00:22:07.001 EAL: No free 2048 kB hugepages reported on node 1 00:22:07.001 [2024-07-15 20:47:41.363901] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:07.001 [2024-07-15 20:47:41.363941] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:07.001 [2024-07-15 20:47:41.363946] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:07.001 [2024-07-15 20:47:41.363955] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:07.001 [2024-07-15 20:47:41.363961] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:07.001 [2024-07-15 20:47:41.364262] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:07.001 [2024-07-15 20:47:41.364286] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x5d0ec0 0 00:22:07.001 [2024-07-15 20:47:41.379233] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:07.001 [2024-07-15 20:47:41.379247] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:07.001 [2024-07-15 20:47:41.379251] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:07.001 [2024-07-15 20:47:41.379254] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:07.001 [2024-07-15 20:47:41.379281] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.001 [2024-07-15 20:47:41.379286] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.001 [2024-07-15 20:47:41.379290] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5d0ec0) 00:22:07.001 [2024-07-15 20:47:41.379300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:07.001 [2024-07-15 20:47:41.379314] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x653e40, cid 0, qid 0 00:22:07.001 [2024-07-15 20:47:41.387234] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.001 [2024-07-15 20:47:41.387241] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.001 [2024-07-15 20:47:41.387245] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.001 [2024-07-15 20:47:41.387248] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x653e40) on tqpair=0x5d0ec0 00:22:07.001 [2024-07-15 20:47:41.387258] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 
0x0001 00:22:07.001 [2024-07-15 20:47:41.387267] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:07.001 [2024-07-15 20:47:41.387272] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:07.001 [2024-07-15 20:47:41.387281] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.001 [2024-07-15 20:47:41.387285] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.001 [2024-07-15 20:47:41.387288] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5d0ec0) 00:22:07.001 [2024-07-15 20:47:41.387294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.001 [2024-07-15 20:47:41.387308] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x653e40, cid 0, qid 0 00:22:07.001 [2024-07-15 20:47:41.387482] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.001 [2024-07-15 20:47:41.387488] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.001 [2024-07-15 20:47:41.387491] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.001 [2024-07-15 20:47:41.387494] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x653e40) on tqpair=0x5d0ec0 00:22:07.001 [2024-07-15 20:47:41.387498] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:07.001 [2024-07-15 20:47:41.387505] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:07.001 [2024-07-15 20:47:41.387511] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.001 [2024-07-15 20:47:41.387515] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.001 [2024-07-15 20:47:41.387518] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5d0ec0) 00:22:07.001 [2024-07-15 20:47:41.387523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.001 [2024-07-15 20:47:41.387533] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x653e40, cid 0, qid 0 00:22:07.001 [2024-07-15 20:47:41.387629] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.001 [2024-07-15 20:47:41.387635] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.001 [2024-07-15 20:47:41.387638] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.001 [2024-07-15 20:47:41.387641] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x653e40) on tqpair=0x5d0ec0 00:22:07.001 [2024-07-15 20:47:41.387645] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:07.001 [2024-07-15 20:47:41.387652] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:07.001 [2024-07-15 20:47:41.387658] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.001 [2024-07-15 20:47:41.387661] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.001 [2024-07-15 20:47:41.387664] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=0 on tqpair(0x5d0ec0) 00:22:07.001 [2024-07-15 20:47:41.387670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.001 [2024-07-15 20:47:41.387679] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x653e40, cid 0, qid 0 00:22:07.001 [2024-07-15 20:47:41.387756] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.001 [2024-07-15 20:47:41.387762] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.001 [2024-07-15 20:47:41.387765] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.001 [2024-07-15 20:47:41.387768] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x653e40) on tqpair=0x5d0ec0 00:22:07.001 [2024-07-15 20:47:41.387772] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:07.001 [2024-07-15 20:47:41.387781] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.001 [2024-07-15 20:47:41.387787] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.001 [2024-07-15 20:47:41.387790] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5d0ec0) 00:22:07.001 [2024-07-15 20:47:41.387796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.001 [2024-07-15 20:47:41.387805] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x653e40, cid 0, qid 0 00:22:07.001 [2024-07-15 20:47:41.387881] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.001 [2024-07-15 20:47:41.387887] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.001 [2024-07-15 20:47:41.387889] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.001 [2024-07-15 20:47:41.387893] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x653e40) on tqpair=0x5d0ec0 00:22:07.001 [2024-07-15 20:47:41.387897] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:07.001 [2024-07-15 20:47:41.387901] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:07.001 [2024-07-15 20:47:41.387907] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:07.001 [2024-07-15 20:47:41.388012] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:07.001 [2024-07-15 20:47:41.388016] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:07.001 [2024-07-15 20:47:41.388023] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.001 [2024-07-15 20:47:41.388026] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.001 [2024-07-15 20:47:41.388029] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5d0ec0) 00:22:07.001 [2024-07-15 20:47:41.388034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.001 [2024-07-15 20:47:41.388044] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x653e40, cid 0, qid 0 00:22:07.001 [2024-07-15 20:47:41.388124] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.001 [2024-07-15 20:47:41.388130] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.001 [2024-07-15 20:47:41.388133] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.001 [2024-07-15 20:47:41.388136] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x653e40) on tqpair=0x5d0ec0 00:22:07.001 [2024-07-15 20:47:41.388140] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:07.001 [2024-07-15 20:47:41.388148] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.001 [2024-07-15 20:47:41.388151] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.001 [2024-07-15 20:47:41.388155] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5d0ec0) 00:22:07.002 [2024-07-15 20:47:41.388160] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.002 [2024-07-15 20:47:41.388169] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x653e40, cid 0, qid 0 00:22:07.002 [2024-07-15 20:47:41.388275] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.002 [2024-07-15 20:47:41.388281] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.002 [2024-07-15 20:47:41.388284] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.002 [2024-07-15 20:47:41.388287] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x653e40) on tqpair=0x5d0ec0 00:22:07.002 [2024-07-15 20:47:41.388291] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:07.002 [2024-07-15 20:47:41.388297] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:07.002 [2024-07-15 20:47:41.388304] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:07.002 [2024-07-15 20:47:41.388311] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:07.002 [2024-07-15 20:47:41.388319] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.002 [2024-07-15 20:47:41.388322] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5d0ec0) 00:22:07.002 [2024-07-15 20:47:41.388328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.002 [2024-07-15 20:47:41.388337] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x653e40, cid 0, qid 0 00:22:07.002 [2024-07-15 20:47:41.388455] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:07.002 [2024-07-15 20:47:41.388461] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:07.002 [2024-07-15 20:47:41.388464] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:07.002 [2024-07-15 20:47:41.388467] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
c2h_data info on tqpair(0x5d0ec0): datao=0, datal=4096, cccid=0 00:22:07.002 [2024-07-15 20:47:41.388471] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x653e40) on tqpair(0x5d0ec0): expected_datao=0, payload_size=4096 00:22:07.002 [2024-07-15 20:47:41.388474] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.002 [2024-07-15 20:47:41.388532] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:07.002 [2024-07-15 20:47:41.388536] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:07.002 [2024-07-15 20:47:41.388616] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.002 [2024-07-15 20:47:41.388622] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.002 [2024-07-15 20:47:41.388625] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.002 [2024-07-15 20:47:41.388628] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x653e40) on tqpair=0x5d0ec0 00:22:07.002 [2024-07-15 20:47:41.388634] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:07.002 [2024-07-15 20:47:41.388641] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:07.002 [2024-07-15 20:47:41.388645] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:07.002 [2024-07-15 20:47:41.388649] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:07.002 [2024-07-15 20:47:41.388652] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:07.002 [2024-07-15 20:47:41.388656] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:07.002 [2024-07-15 20:47:41.388664] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:07.002 [2024-07-15 20:47:41.388670] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.002 [2024-07-15 20:47:41.388673] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.002 [2024-07-15 20:47:41.388676] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5d0ec0) 00:22:07.002 [2024-07-15 20:47:41.388683] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:07.002 [2024-07-15 20:47:41.388693] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x653e40, cid 0, qid 0 00:22:07.002 [2024-07-15 20:47:41.388773] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.002 [2024-07-15 20:47:41.388779] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.002 [2024-07-15 20:47:41.388784] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.002 [2024-07-15 20:47:41.388787] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x653e40) on tqpair=0x5d0ec0 00:22:07.002 [2024-07-15 20:47:41.388793] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.002 [2024-07-15 20:47:41.388796] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.002 [2024-07-15 20:47:41.388800] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0x5d0ec0) 00:22:07.002 [2024-07-15 20:47:41.388805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.002 [2024-07-15 20:47:41.388810] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.002 [2024-07-15 20:47:41.388813] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.002 [2024-07-15 20:47:41.388816] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x5d0ec0) 00:22:07.002 [2024-07-15 20:47:41.388821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.002 [2024-07-15 20:47:41.388826] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.002 [2024-07-15 20:47:41.388829] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.002 [2024-07-15 20:47:41.388832] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x5d0ec0) 00:22:07.002 [2024-07-15 20:47:41.388837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.002 [2024-07-15 20:47:41.388842] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.002 [2024-07-15 20:47:41.388845] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.002 [2024-07-15 20:47:41.388848] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5d0ec0) 00:22:07.002 [2024-07-15 20:47:41.388853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.002 [2024-07-15 20:47:41.388857] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:07.002 [2024-07-15 20:47:41.388867] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:07.002 [2024-07-15 20:47:41.388872] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.002 [2024-07-15 20:47:41.388876] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5d0ec0) 00:22:07.002 [2024-07-15 20:47:41.388881] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.002 [2024-07-15 20:47:41.388892] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x653e40, cid 0, qid 0 00:22:07.002 [2024-07-15 20:47:41.388897] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x653fc0, cid 1, qid 0 00:22:07.002 [2024-07-15 20:47:41.388901] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x654140, cid 2, qid 0 00:22:07.002 [2024-07-15 20:47:41.388905] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6542c0, cid 3, qid 0 00:22:07.002 [2024-07-15 20:47:41.388909] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x654440, cid 4, qid 0 00:22:07.002 [2024-07-15 20:47:41.389038] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.002 [2024-07-15 20:47:41.389043] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.002 [2024-07-15 20:47:41.389046] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.002 [2024-07-15 20:47:41.389049] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x654440) on tqpair=0x5d0ec0 00:22:07.002 [2024-07-15 20:47:41.389053] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:07.002 [2024-07-15 20:47:41.389058] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:07.002 [2024-07-15 20:47:41.389066] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:07.002 [2024-07-15 20:47:41.389072] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:07.002 [2024-07-15 20:47:41.389077] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.002 [2024-07-15 20:47:41.389081] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.002 [2024-07-15 20:47:41.389084] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5d0ec0) 00:22:07.002 [2024-07-15 20:47:41.389090] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:07.002 [2024-07-15 20:47:41.389099] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x654440, cid 4, qid 0 00:22:07.002 [2024-07-15 20:47:41.389187] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.002 [2024-07-15 20:47:41.389193] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.002 [2024-07-15 20:47:41.389196] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.002 [2024-07-15 20:47:41.389199] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x654440) on tqpair=0x5d0ec0 00:22:07.002 [2024-07-15 20:47:41.389254] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:07.002 [2024-07-15 20:47:41.389264] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:07.002 [2024-07-15 20:47:41.389271] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.002 [2024-07-15 20:47:41.389274] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5d0ec0) 00:22:07.002 [2024-07-15 20:47:41.389279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.002 [2024-07-15 20:47:41.389289] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x654440, cid 4, qid 0 00:22:07.002 [2024-07-15 20:47:41.389379] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:07.002 [2024-07-15 20:47:41.389385] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:07.002 [2024-07-15 20:47:41.389388] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:07.002 [2024-07-15 20:47:41.389392] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5d0ec0): datao=0, datal=4096, cccid=4 00:22:07.002 [2024-07-15 20:47:41.389395] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x654440) on tqpair(0x5d0ec0): expected_datao=0, payload_size=4096 00:22:07.002 [2024-07-15 20:47:41.389399] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.002 [2024-07-15 20:47:41.389426] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:07.002 [2024-07-15 20:47:41.389430] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:07.002 [2024-07-15 20:47:41.435233] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.002 [2024-07-15 20:47:41.435244] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.002 [2024-07-15 20:47:41.435247] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.002 [2024-07-15 20:47:41.435251] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x654440) on tqpair=0x5d0ec0 00:22:07.002 [2024-07-15 20:47:41.435261] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:07.002 [2024-07-15 20:47:41.435273] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:07.002 [2024-07-15 20:47:41.435283] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:07.002 [2024-07-15 20:47:41.435289] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.003 [2024-07-15 20:47:41.435295] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5d0ec0) 00:22:07.003 [2024-07-15 20:47:41.435302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.003 [2024-07-15 20:47:41.435315] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x654440, cid 4, qid 0 00:22:07.003 [2024-07-15 20:47:41.435492] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:07.003 [2024-07-15 20:47:41.435498] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:07.003 [2024-07-15 20:47:41.435501] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:07.003 [2024-07-15 20:47:41.435504] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5d0ec0): datao=0, datal=4096, cccid=4 00:22:07.003 [2024-07-15 20:47:41.435508] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x654440) on tqpair(0x5d0ec0): expected_datao=0, payload_size=4096 00:22:07.003 [2024-07-15 20:47:41.435512] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.003 [2024-07-15 20:47:41.435539] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:07.003 [2024-07-15 20:47:41.435543] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:07.264 [2024-07-15 20:47:41.477372] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.264 [2024-07-15 20:47:41.477384] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.264 [2024-07-15 20:47:41.477387] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.264 [2024-07-15 20:47:41.477391] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x654440) on tqpair=0x5d0ec0 00:22:07.264 [2024-07-15 20:47:41.477403] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id 
descriptors (timeout 30000 ms) 00:22:07.264 [2024-07-15 20:47:41.477413] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:07.264 [2024-07-15 20:47:41.477421] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.264 [2024-07-15 20:47:41.477424] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5d0ec0) 00:22:07.264 [2024-07-15 20:47:41.477431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.264 [2024-07-15 20:47:41.477443] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x654440, cid 4, qid 0 00:22:07.264 [2024-07-15 20:47:41.477593] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:07.264 [2024-07-15 20:47:41.477598] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:07.264 [2024-07-15 20:47:41.477601] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:07.264 [2024-07-15 20:47:41.477604] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5d0ec0): datao=0, datal=4096, cccid=4 00:22:07.264 [2024-07-15 20:47:41.477608] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x654440) on tqpair(0x5d0ec0): expected_datao=0, payload_size=4096 00:22:07.264 [2024-07-15 20:47:41.477612] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.264 [2024-07-15 20:47:41.477648] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:07.264 [2024-07-15 20:47:41.477652] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:07.264 [2024-07-15 20:47:41.519415] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.264 [2024-07-15 20:47:41.519424] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.264 [2024-07-15 20:47:41.519427] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.264 [2024-07-15 20:47:41.519431] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x654440) on tqpair=0x5d0ec0 00:22:07.264 [2024-07-15 20:47:41.519438] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:07.264 [2024-07-15 20:47:41.519446] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:07.264 [2024-07-15 20:47:41.519458] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:07.264 [2024-07-15 20:47:41.519463] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:07.264 [2024-07-15 20:47:41.519467] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:07.264 [2024-07-15 20:47:41.519472] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:07.264 [2024-07-15 20:47:41.519476] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:07.264 [2024-07-15 20:47:41.519480] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:07.264 [2024-07-15 20:47:41.519485] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:07.264 [2024-07-15 20:47:41.519498] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.264 [2024-07-15 20:47:41.519502] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5d0ec0) 00:22:07.264 [2024-07-15 20:47:41.519509] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.264 [2024-07-15 20:47:41.519514] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.264 [2024-07-15 20:47:41.519518] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.264 [2024-07-15 20:47:41.519521] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5d0ec0) 00:22:07.264 [2024-07-15 20:47:41.519526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.264 [2024-07-15 20:47:41.519540] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x654440, cid 4, qid 0 00:22:07.264 [2024-07-15 20:47:41.519545] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6545c0, cid 5, qid 0 00:22:07.264 [2024-07-15 20:47:41.519677] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.264 [2024-07-15 20:47:41.519683] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.264 [2024-07-15 20:47:41.519685] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.264 [2024-07-15 20:47:41.519689] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x654440) on tqpair=0x5d0ec0 00:22:07.264 [2024-07-15 20:47:41.519694] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.264 [2024-07-15 20:47:41.519699] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.264 [2024-07-15 20:47:41.519702] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.265 [2024-07-15 20:47:41.519706] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6545c0) on tqpair=0x5d0ec0 00:22:07.265 [2024-07-15 20:47:41.519714] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.265 [2024-07-15 20:47:41.519718] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5d0ec0) 00:22:07.265 [2024-07-15 20:47:41.519723] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.265 [2024-07-15 20:47:41.519733] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6545c0, cid 5, qid 0 00:22:07.265 [2024-07-15 20:47:41.519814] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.265 [2024-07-15 20:47:41.519820] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.265 [2024-07-15 20:47:41.519822] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.265 [2024-07-15 20:47:41.519825] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6545c0) on tqpair=0x5d0ec0 00:22:07.265 [2024-07-15 20:47:41.519835] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
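The *DEBUG* ladder above ("read vs" -> "read cap" -> "check en" -> CC.EN = 1 -> wait for CSTS.RDY = 1 -> identify/AER/keep-alive/queue setup -> "ready") is the SPDK NVMe driver walking its controller-initialization state machine over the fabrics admin queue; each FABRIC PROPERTY GET/SET notice is one register access carried in a capsule command. Traces like this only appear in a debug build with the nvme log flags turned on. A rough way to reproduce such a trace by hand against the same target might look like the sketch below; the binary path and the -L log-flag options are assumptions, not taken from this log, while the transport string mirrors the connect parameters visible above:

    # Hedged sketch, not a command from this log: requires an SPDK tree
    # configured with --enable-debug; -L support on the identify example
    # binary is assumed here.
    ./build/examples/identify \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -L nvme -L nvme_tcp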
00:22:07.265 [2024-07-15 20:47:41.519839] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5d0ec0) 00:22:07.265 [2024-07-15 20:47:41.519844] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.265 [2024-07-15 20:47:41.519854] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6545c0, cid 5, qid 0 00:22:07.265 [2024-07-15 20:47:41.519977] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.265 [2024-07-15 20:47:41.519982] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.265 [2024-07-15 20:47:41.519985] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.265 [2024-07-15 20:47:41.519988] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6545c0) on tqpair=0x5d0ec0 00:22:07.265 [2024-07-15 20:47:41.519996] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.265 [2024-07-15 20:47:41.519999] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5d0ec0) 00:22:07.265 [2024-07-15 20:47:41.520005] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.265 [2024-07-15 20:47:41.520014] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6545c0, cid 5, qid 0 00:22:07.265 [2024-07-15 20:47:41.520127] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.265 [2024-07-15 20:47:41.520133] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.265 [2024-07-15 20:47:41.520136] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.265 [2024-07-15 20:47:41.520139] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6545c0) on tqpair=0x5d0ec0 00:22:07.265 [2024-07-15 20:47:41.520152] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.265 [2024-07-15 20:47:41.520156] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5d0ec0) 00:22:07.265 [2024-07-15 20:47:41.520161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.265 [2024-07-15 20:47:41.520168] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.265 [2024-07-15 20:47:41.520171] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5d0ec0) 00:22:07.265 [2024-07-15 20:47:41.520176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.265 [2024-07-15 20:47:41.520182] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.265 [2024-07-15 20:47:41.520185] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x5d0ec0) 00:22:07.265 [2024-07-15 20:47:41.520191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.265 [2024-07-15 20:47:41.520197] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.265 [2024-07-15 20:47:41.520200] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on 
tqpair(0x5d0ec0) 00:22:07.265 [2024-07-15 20:47:41.520205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.265 [2024-07-15 20:47:41.520215] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6545c0, cid 5, qid 0 00:22:07.265 [2024-07-15 20:47:41.520220] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x654440, cid 4, qid 0 00:22:07.265 [2024-07-15 20:47:41.520230] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x654740, cid 6, qid 0 00:22:07.265 [2024-07-15 20:47:41.520234] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6548c0, cid 7, qid 0 00:22:07.265 [2024-07-15 20:47:41.520380] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:07.265 [2024-07-15 20:47:41.520388] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:07.265 [2024-07-15 20:47:41.520391] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:07.265 [2024-07-15 20:47:41.520394] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5d0ec0): datao=0, datal=8192, cccid=5 00:22:07.265 [2024-07-15 20:47:41.520398] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6545c0) on tqpair(0x5d0ec0): expected_datao=0, payload_size=8192 00:22:07.265 [2024-07-15 20:47:41.520402] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.265 [2024-07-15 20:47:41.520530] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:07.265 [2024-07-15 20:47:41.520534] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:07.265 [2024-07-15 20:47:41.520538] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:07.265 [2024-07-15 20:47:41.520543] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:07.265 [2024-07-15 20:47:41.520546] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:07.265 [2024-07-15 20:47:41.520549] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5d0ec0): datao=0, datal=512, cccid=4 00:22:07.265 [2024-07-15 20:47:41.520553] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x654440) on tqpair(0x5d0ec0): expected_datao=0, payload_size=512 00:22:07.265 [2024-07-15 20:47:41.520556] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.265 [2024-07-15 20:47:41.520562] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:07.265 [2024-07-15 20:47:41.520565] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:07.265 [2024-07-15 20:47:41.520569] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:07.265 [2024-07-15 20:47:41.520574] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:07.265 [2024-07-15 20:47:41.520577] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:07.265 [2024-07-15 20:47:41.520580] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5d0ec0): datao=0, datal=512, cccid=6 00:22:07.265 [2024-07-15 20:47:41.520583] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x654740) on tqpair(0x5d0ec0): expected_datao=0, payload_size=512 00:22:07.265 [2024-07-15 20:47:41.520587] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.265 [2024-07-15 20:47:41.520592] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:07.265 
[2024-07-15 20:47:41.520595] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:07.265 [2024-07-15 20:47:41.520600] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:07.265 [2024-07-15 20:47:41.520605] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:07.265 [2024-07-15 20:47:41.520607] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:07.265 [2024-07-15 20:47:41.520610] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5d0ec0): datao=0, datal=4096, cccid=7
00:22:07.265 [2024-07-15 20:47:41.520614] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6548c0) on tqpair(0x5d0ec0): expected_datao=0, payload_size=4096
00:22:07.265 [2024-07-15 20:47:41.520618] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:07.265 [2024-07-15 20:47:41.520623] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:07.265 [2024-07-15 20:47:41.520626] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:07.265 [2024-07-15 20:47:41.520633] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:07.265 [2024-07-15 20:47:41.520638] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:07.265 [2024-07-15 20:47:41.520641] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:07.265 [2024-07-15 20:47:41.520644] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6545c0) on tqpair=0x5d0ec0
00:22:07.265 [2024-07-15 20:47:41.520654] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:07.265 [2024-07-15 20:47:41.520660] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:07.265 [2024-07-15 20:47:41.520663] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:07.265 [2024-07-15 20:47:41.520666] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x654440) on tqpair=0x5d0ec0
00:22:07.266 [2024-07-15 20:47:41.520675] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:07.266 [2024-07-15 20:47:41.520680] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:07.266 [2024-07-15 20:47:41.520683] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:07.266 [2024-07-15 20:47:41.520686] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x654740) on tqpair=0x5d0ec0
00:22:07.266 [2024-07-15 20:47:41.520692] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:07.266 [2024-07-15 20:47:41.520697] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:07.266 [2024-07-15 20:47:41.520700] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:07.266 [2024-07-15 20:47:41.520703] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6548c0) on tqpair=0x5d0ec0
00:22:07.266 =====================================================
00:22:07.266 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:07.266 =====================================================
00:22:07.266 Controller Capabilities/Features
00:22:07.266 ================================
00:22:07.266 Vendor ID: 8086
00:22:07.266 Subsystem Vendor ID: 8086
00:22:07.266 Serial Number: SPDK00000000000001
00:22:07.266 Model Number: SPDK bdev Controller
00:22:07.266 Firmware Version: 24.09
00:22:07.266 Recommended Arb Burst: 6
00:22:07.266 IEEE OUI Identifier: e4 d2 5c
00:22:07.266 Multi-path I/O
00:22:07.266 May have multiple subsystem ports: Yes
00:22:07.266 May have multiple controllers: Yes
00:22:07.266 Associated with SR-IOV VF: No
00:22:07.266 Max Data Transfer Size: 131072
00:22:07.266 Max Number of Namespaces: 32
00:22:07.266 Max Number of I/O Queues: 127
00:22:07.266 NVMe Specification Version (VS): 1.3
00:22:07.266 NVMe Specification Version (Identify): 1.3
00:22:07.266 Maximum Queue Entries: 128
00:22:07.266 Contiguous Queues Required: Yes
00:22:07.266 Arbitration Mechanisms Supported
00:22:07.266 Weighted Round Robin: Not Supported
00:22:07.266 Vendor Specific: Not Supported
00:22:07.266 Reset Timeout: 15000 ms
00:22:07.266 Doorbell Stride: 4 bytes
00:22:07.266 NVM Subsystem Reset: Not Supported
00:22:07.266 Command Sets Supported
00:22:07.266 NVM Command Set: Supported
00:22:07.266 Boot Partition: Not Supported
00:22:07.266 Memory Page Size Minimum: 4096 bytes
00:22:07.266 Memory Page Size Maximum: 4096 bytes
00:22:07.266 Persistent Memory Region: Not Supported
00:22:07.266 Optional Asynchronous Events Supported
00:22:07.266 Namespace Attribute Notices: Supported
00:22:07.266 Firmware Activation Notices: Not Supported
00:22:07.266 ANA Change Notices: Not Supported
00:22:07.266 PLE Aggregate Log Change Notices: Not Supported
00:22:07.266 LBA Status Info Alert Notices: Not Supported
00:22:07.266 EGE Aggregate Log Change Notices: Not Supported
00:22:07.266 Normal NVM Subsystem Shutdown event: Not Supported
00:22:07.266 Zone Descriptor Change Notices: Not Supported
00:22:07.266 Discovery Log Change Notices: Not Supported
00:22:07.266 Controller Attributes
00:22:07.266 128-bit Host Identifier: Supported
00:22:07.266 Non-Operational Permissive Mode: Not Supported
00:22:07.266 NVM Sets: Not Supported
00:22:07.266 Read Recovery Levels: Not Supported
00:22:07.266 Endurance Groups: Not Supported
00:22:07.266 Predictable Latency Mode: Not Supported
00:22:07.266 Traffic Based Keep ALive: Not Supported
00:22:07.266 Namespace Granularity: Not Supported
00:22:07.266 SQ Associations: Not Supported
00:22:07.266 UUID List: Not Supported
00:22:07.266 Multi-Domain Subsystem: Not Supported
00:22:07.266 Fixed Capacity Management: Not Supported
00:22:07.266 Variable Capacity Management: Not Supported
00:22:07.266 Delete Endurance Group: Not Supported
00:22:07.266 Delete NVM Set: Not Supported
00:22:07.266 Extended LBA Formats Supported: Not Supported
00:22:07.266 Flexible Data Placement Supported: Not Supported
00:22:07.266
00:22:07.266 Controller Memory Buffer Support
00:22:07.266 ================================
00:22:07.266 Supported: No
00:22:07.266
00:22:07.266 Persistent Memory Region Support
00:22:07.266 ================================
00:22:07.266 Supported: No
00:22:07.266
00:22:07.266 Admin Command Set Attributes
00:22:07.266 ============================
00:22:07.266 Security Send/Receive: Not Supported
00:22:07.266 Format NVM: Not Supported
00:22:07.266 Firmware Activate/Download: Not Supported
00:22:07.266 Namespace Management: Not Supported
00:22:07.266 Device Self-Test: Not Supported
00:22:07.266 Directives: Not Supported
00:22:07.266 NVMe-MI: Not Supported
00:22:07.266 Virtualization Management: Not Supported
00:22:07.266 Doorbell Buffer Config: Not Supported
00:22:07.266 Get LBA Status Capability: Not Supported
00:22:07.266 Command & Feature Lockdown Capability: Not Supported
00:22:07.266 Abort Command Limit: 4
00:22:07.266 Async Event Request Limit: 4
00:22:07.266 Number of Firmware Slots: N/A
00:22:07.266 Firmware Slot 1 Read-Only: N/A
00:22:07.266 Firmware Activation Without Reset: N/A
00:22:07.266 Multiple Update Detection Support: N/A
00:22:07.266 Firmware Update Granularity: No Information Provided
00:22:07.266 Per-Namespace SMART Log: No
00:22:07.266 Asymmetric Namespace Access Log Page: Not Supported
00:22:07.266 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:22:07.266 Command Effects Log Page: Supported
00:22:07.266 Get Log Page Extended Data: Supported
00:22:07.266 Telemetry Log Pages: Not Supported
00:22:07.266 Persistent Event Log Pages: Not Supported
00:22:07.266 Supported Log Pages Log Page: May Support
00:22:07.266 Commands Supported & Effects Log Page: Not Supported
00:22:07.266 Feature Identifiers & Effects Log Page:May Support
00:22:07.266 NVMe-MI Commands & Effects Log Page: May Support
00:22:07.266 Data Area 4 for Telemetry Log: Not Supported
00:22:07.266 Error Log Page Entries Supported: 128
00:22:07.266 Keep Alive: Supported
00:22:07.266 Keep Alive Granularity: 10000 ms
00:22:07.266
00:22:07.266 NVM Command Set Attributes
00:22:07.266 ==========================
00:22:07.266 Submission Queue Entry Size
00:22:07.266 Max: 64
00:22:07.266 Min: 64
00:22:07.267 Completion Queue Entry Size
00:22:07.267 Max: 16
00:22:07.267 Min: 16
00:22:07.267 Number of Namespaces: 32
00:22:07.267 Compare Command: Supported
00:22:07.267 Write Uncorrectable Command: Not Supported
00:22:07.267 Dataset Management Command: Supported
00:22:07.267 Write Zeroes Command: Supported
00:22:07.267 Set Features Save Field: Not Supported
00:22:07.267 Reservations: Supported
00:22:07.267 Timestamp: Not Supported
00:22:07.267 Copy: Supported
00:22:07.267 Volatile Write Cache: Present
00:22:07.267 Atomic Write Unit (Normal): 1
00:22:07.267 Atomic Write Unit (PFail): 1
00:22:07.267 Atomic Compare & Write Unit: 1
00:22:07.267 Fused Compare & Write: Supported
00:22:07.267 Scatter-Gather List
00:22:07.267 SGL Command Set: Supported
00:22:07.267 SGL Keyed: Supported
00:22:07.267 SGL Bit Bucket Descriptor: Not Supported
00:22:07.267 SGL Metadata Pointer: Not Supported
00:22:07.267 Oversized SGL: Not Supported
00:22:07.267 SGL Metadata Address: Not Supported
00:22:07.267 SGL Offset: Supported
00:22:07.267 Transport SGL Data Block: Not Supported
00:22:07.267 Replay Protected Memory Block: Not Supported
00:22:07.267
00:22:07.267 Firmware Slot Information
00:22:07.267 =========================
00:22:07.267 Active slot: 1
00:22:07.267 Slot 1 Firmware Revision: 24.09
00:22:07.267
00:22:07.267
00:22:07.267 Commands Supported and Effects
00:22:07.267 ==============================
00:22:07.267 Admin Commands
00:22:07.267 --------------
00:22:07.267 Get Log Page (02h): Supported
00:22:07.267 Identify (06h): Supported
00:22:07.267 Abort (08h): Supported
00:22:07.267 Set Features (09h): Supported
00:22:07.267 Get Features (0Ah): Supported
00:22:07.267 Asynchronous Event Request (0Ch): Supported
00:22:07.267 Keep Alive (18h): Supported
00:22:07.267 I/O Commands
00:22:07.267 ------------
00:22:07.267 Flush (00h): Supported LBA-Change
00:22:07.267 Write (01h): Supported LBA-Change
00:22:07.267 Read (02h): Supported
00:22:07.267 Compare (05h): Supported
00:22:07.267 Write Zeroes (08h): Supported LBA-Change
00:22:07.267 Dataset Management (09h): Supported LBA-Change
00:22:07.267 Copy (19h): Supported LBA-Change
00:22:07.267
00:22:07.267 Error Log
00:22:07.267 =========
00:22:07.267
00:22:07.267 Arbitration
00:22:07.267 ===========
00:22:07.267 Arbitration Burst: 1
00:22:07.267
00:22:07.267 Power Management
00:22:07.267 ================
00:22:07.267 Number of Power States: 1
00:22:07.267 Current Power State: Power State #0
00:22:07.267 Power State #0:
00:22:07.267 Max Power: 0.00 W
00:22:07.267 Non-Operational State: Operational
00:22:07.267 Entry Latency: Not Reported
00:22:07.267 Exit Latency: Not Reported
00:22:07.267 Relative Read Throughput: 0
00:22:07.267 Relative Read Latency: 0
00:22:07.267 Relative Write Throughput: 0
00:22:07.267 Relative Write Latency: 0
00:22:07.267 Idle Power: Not Reported
00:22:07.267 Active Power: Not Reported
00:22:07.267 Non-Operational Permissive Mode: Not Supported
00:22:07.267
00:22:07.267 Health Information
00:22:07.267 ==================
00:22:07.267 Critical Warnings:
00:22:07.267 Available Spare Space: OK
00:22:07.267 Temperature: OK
00:22:07.267 Device Reliability: OK
00:22:07.267 Read Only: No
00:22:07.267 Volatile Memory Backup: OK
00:22:07.267 Current Temperature: 0 Kelvin (-273 Celsius)
00:22:07.267 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:22:07.267 Available Spare: 0%
00:22:07.267 Available Spare Threshold: 0%
00:22:07.267 Life Percentage Used:[2024-07-15 20:47:41.520786] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:07.267 [2024-07-15 20:47:41.520790] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x5d0ec0)
00:22:07.267 [2024-07-15 20:47:41.520796] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:07.267 [2024-07-15 20:47:41.520808] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6548c0, cid 7, qid 0
00:22:07.267 [2024-07-15 20:47:41.520939] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:07.267 [2024-07-15 20:47:41.520945] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:07.267 [2024-07-15 20:47:41.520948] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:07.267 [2024-07-15 20:47:41.520951] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6548c0) on tqpair=0x5d0ec0
00:22:07.267 [2024-07-15 20:47:41.520979] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:22:07.267 [2024-07-15 20:47:41.520988] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x653e40) on tqpair=0x5d0ec0
00:22:07.267 [2024-07-15 20:47:41.520993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.267 [2024-07-15 20:47:41.520998] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x653fc0) on tqpair=0x5d0ec0
00:22:07.267 [2024-07-15 20:47:41.521002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.267 [2024-07-15 20:47:41.521006] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x654140) on tqpair=0x5d0ec0
00:22:07.267 [2024-07-15 20:47:41.521010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.267 [2024-07-15 20:47:41.521014] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6542c0) on tqpair=0x5d0ec0
00:22:07.267 [2024-07-15 20:47:41.521018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.267 [2024-07-15 20:47:41.521024] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:07.267 [2024-07-15 20:47:41.521028] nvme_tcp.c:
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.267 [2024-07-15 20:47:41.521031] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5d0ec0) 00:22:07.267 [2024-07-15 20:47:41.521037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.267 [2024-07-15 20:47:41.521047] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6542c0, cid 3, qid 0 00:22:07.267 [2024-07-15 20:47:41.521139] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.267 [2024-07-15 20:47:41.521144] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.267 [2024-07-15 20:47:41.521147] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.268 [2024-07-15 20:47:41.521151] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6542c0) on tqpair=0x5d0ec0 00:22:07.268 [2024-07-15 20:47:41.521156] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.268 [2024-07-15 20:47:41.521161] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.268 [2024-07-15 20:47:41.521164] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5d0ec0) 00:22:07.268 [2024-07-15 20:47:41.521170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.268 [2024-07-15 20:47:41.521183] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6542c0, cid 3, qid 0 00:22:07.268 [2024-07-15 20:47:41.521272] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.268 [2024-07-15 20:47:41.521278] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.268 [2024-07-15 20:47:41.521281] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.268 [2024-07-15 20:47:41.521284] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6542c0) on tqpair=0x5d0ec0 00:22:07.268 [2024-07-15 20:47:41.521288] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:07.268 [2024-07-15 20:47:41.521292] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:07.268 [2024-07-15 20:47:41.521299] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.268 [2024-07-15 20:47:41.521303] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.268 [2024-07-15 20:47:41.521306] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5d0ec0) 00:22:07.268 [2024-07-15 20:47:41.521311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.268 [2024-07-15 20:47:41.521321] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6542c0, cid 3, qid 0 00:22:07.268 [2024-07-15 20:47:41.521441] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.268 [2024-07-15 20:47:41.521446] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.268 [2024-07-15 20:47:41.521449] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.268 [2024-07-15 20:47:41.521452] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6542c0) on tqpair=0x5d0ec0 00:22:07.268 [2024-07-15 20:47:41.521460] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.268 [2024-07-15 20:47:41.521464] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.268 [2024-07-15 20:47:41.521467] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5d0ec0) 00:22:07.268 [2024-07-15 20:47:41.521472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.268 [2024-07-15 20:47:41.521481] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6542c0, cid 3, qid 0 00:22:07.268 [2024-07-15 20:47:41.521591] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.268 [2024-07-15 20:47:41.521597] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.268 [2024-07-15 20:47:41.521600] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.268 [2024-07-15 20:47:41.521603] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6542c0) on tqpair=0x5d0ec0 00:22:07.268 [2024-07-15 20:47:41.521611] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.268 [2024-07-15 20:47:41.521614] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.268 [2024-07-15 20:47:41.521617] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5d0ec0) 00:22:07.268 [2024-07-15 20:47:41.521623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.268 [2024-07-15 20:47:41.521632] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6542c0, cid 3, qid 0 00:22:07.268 [2024-07-15 20:47:41.521743] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.268 [2024-07-15 20:47:41.521749] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.268 [2024-07-15 20:47:41.521752] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.268 [2024-07-15 20:47:41.521757] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6542c0) on tqpair=0x5d0ec0 00:22:07.268 [2024-07-15 20:47:41.521765] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.268 [2024-07-15 20:47:41.521768] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.268 [2024-07-15 20:47:41.521771] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5d0ec0) 00:22:07.268 [2024-07-15 20:47:41.521777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.268 [2024-07-15 20:47:41.521786] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6542c0, cid 3, qid 0 00:22:07.268 [2024-07-15 20:47:41.521863] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.268 [2024-07-15 20:47:41.521868] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.268 [2024-07-15 20:47:41.521871] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.268 [2024-07-15 20:47:41.521874] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6542c0) on tqpair=0x5d0ec0 00:22:07.268 [2024-07-15 20:47:41.521883] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.268 [2024-07-15 20:47:41.521886] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.268 [2024-07-15 20:47:41.521889] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5d0ec0) 00:22:07.268 [2024-07-15 20:47:41.521895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.268 [2024-07-15 20:47:41.521904] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6542c0, cid 3, qid 0 00:22:07.268 [2024-07-15 20:47:41.521996] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.268 [2024-07-15 20:47:41.522001] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.268 [2024-07-15 20:47:41.522004] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.268 [2024-07-15 20:47:41.522007] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6542c0) on tqpair=0x5d0ec0 00:22:07.268 [2024-07-15 20:47:41.522015] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.268 [2024-07-15 20:47:41.522019] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.268 [2024-07-15 20:47:41.522022] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5d0ec0) 00:22:07.268 [2024-07-15 20:47:41.522027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.268 [2024-07-15 20:47:41.522036] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6542c0, cid 3, qid 0 00:22:07.268 [2024-07-15 20:47:41.522147] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.268 [2024-07-15 20:47:41.522153] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.268 [2024-07-15 20:47:41.522156] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.268 [2024-07-15 20:47:41.522159] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6542c0) on tqpair=0x5d0ec0 00:22:07.268 [2024-07-15 20:47:41.522167] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.268 [2024-07-15 20:47:41.522170] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.268 [2024-07-15 20:47:41.522173] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5d0ec0) 00:22:07.268 [2024-07-15 20:47:41.522179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.268 [2024-07-15 20:47:41.522187] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6542c0, cid 3, qid 0 00:22:07.268 [2024-07-15 20:47:41.526230] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.268 [2024-07-15 20:47:41.526238] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.268 [2024-07-15 20:47:41.526241] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.268 [2024-07-15 20:47:41.526244] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6542c0) on tqpair=0x5d0ec0 00:22:07.268 [2024-07-15 20:47:41.526256] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.268 [2024-07-15 20:47:41.526260] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.268 [2024-07-15 20:47:41.526263] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5d0ec0) 00:22:07.268 [2024-07-15 20:47:41.526269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:07.268 [2024-07-15 20:47:41.526280] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6542c0, cid 3, qid 0
00:22:07.268 [2024-07-15 20:47:41.526426] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:07.268 [2024-07-15 20:47:41.526432] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:07.268 [2024-07-15 20:47:41.526435] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:07.268 [2024-07-15 20:47:41.526438] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6542c0) on tqpair=0x5d0ec0
00:22:07.268 [2024-07-15 20:47:41.526444] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds
00:22:07.268 0%
00:22:07.268 Data Units Read: 0
00:22:07.268 Data Units Written: 0
00:22:07.268 Host Read Commands: 0
00:22:07.268 Host Write Commands: 0
00:22:07.268 Controller Busy Time: 0 minutes
00:22:07.268 Power Cycles: 0
00:22:07.268 Power On Hours: 0 hours
00:22:07.268 Unsafe Shutdowns: 0
00:22:07.268 Unrecoverable Media Errors: 0
00:22:07.268 Lifetime Error Log Entries: 0
00:22:07.268 Warning Temperature Time: 0 minutes
00:22:07.268 Critical Temperature Time: 0 minutes
00:22:07.268
00:22:07.268 Number of Queues
00:22:07.268 ================
00:22:07.268 Number of I/O Submission Queues: 127
00:22:07.268 Number of I/O Completion Queues: 127
00:22:07.268
00:22:07.268 Active Namespaces
00:22:07.268 =================
00:22:07.268 Namespace ID:1
00:22:07.269 Error Recovery Timeout: Unlimited
00:22:07.269 Command Set Identifier: NVM (00h)
00:22:07.269 Deallocate: Supported
00:22:07.269 Deallocated/Unwritten Error: Not Supported
00:22:07.269 Deallocated Read Value: Unknown
00:22:07.269 Deallocate in Write Zeroes: Not Supported
00:22:07.269 Deallocated Guard Field: 0xFFFF
00:22:07.269 Flush: Supported
00:22:07.269 Reservation: Supported
00:22:07.269 Namespace Sharing Capabilities: Multiple Controllers
00:22:07.269 Size (in LBAs): 131072 (0GiB)
00:22:07.269 Capacity (in LBAs): 131072 (0GiB)
00:22:07.269 Utilization (in LBAs): 131072 (0GiB)
00:22:07.269 NGUID: ABCDEF0123456789ABCDEF0123456789
00:22:07.269 EUI64: ABCDEF0123456789
00:22:07.269 UUID: 9ba3b1c6-c515-44cf-bb70-b339824fa5d3
00:22:07.269 Thin Provisioning: Not Supported
00:22:07.269 Per-NS Atomic Units: Yes
00:22:07.269 Atomic Boundary Size (Normal): 0
00:22:07.269 Atomic Boundary Size (PFail): 0
00:22:07.269 Atomic Boundary Offset: 0
00:22:07.269 Maximum Single Source Range Length: 65535
00:22:07.269 Maximum Copy Length: 65535
00:22:07.269 Maximum Source Range Count: 1
00:22:07.269 NGUID/EUI64 Never Reused: No
00:22:07.269 Namespace Write Protected: No
00:22:07.269 Number of LBA Formats: 1
00:22:07.269 Current LBA Format: LBA Format #00
00:22:07.269 LBA Format #00: Data Size: 512 Metadata Size: 0
00:22:07.269
00:22:07.269 20:47:41 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync
00:22:07.269 20:47:41 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:07.269 20:47:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:07.269 20:47:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:07.269 20:47:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:07.269 20:47:41 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:22:07.269 20:47:41 nvmf_tcp.nvmf_identify -- host/identify.sh@56
-- # nvmftestfini 00:22:07.269 20:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:07.269 20:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:22:07.269 20:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:07.269 20:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:22:07.269 20:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:07.269 20:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:07.269 rmmod nvme_tcp 00:22:07.269 rmmod nvme_fabrics 00:22:07.269 rmmod nvme_keyring 00:22:07.269 20:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:07.269 20:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:22:07.269 20:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:22:07.269 20:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2767775 ']' 00:22:07.269 20:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2767775 00:22:07.269 20:47:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 2767775 ']' 00:22:07.269 20:47:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 2767775 00:22:07.269 20:47:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:22:07.269 20:47:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:07.269 20:47:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2767775 00:22:07.269 20:47:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:07.269 20:47:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:07.269 20:47:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2767775' 00:22:07.269 killing process with pid 2767775 00:22:07.269 20:47:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 2767775 00:22:07.269 20:47:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 2767775 00:22:07.562 20:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:07.562 20:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:07.562 20:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:07.562 20:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:07.562 20:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:07.562 20:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:07.562 20:47:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:07.562 20:47:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.475 20:47:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:09.475 00:22:09.475 real 0m9.255s 00:22:09.475 user 0m7.896s 00:22:09.475 sys 0m4.348s 00:22:09.475 20:47:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:09.475 20:47:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:09.475 ************************************ 00:22:09.475 END TEST nvmf_identify 00:22:09.475 ************************************ 00:22:09.734 20:47:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 
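
The nvmftestfini teardown traced above reduces to a short shell sequence. A minimal sketch, reconstructed from this trace; the pid (2767775) and interface name (cvl_0_1) are specific to this run:

    # unload the kernel NVMe/TCP initiator stack pulled in for the test
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # stop the SPDK target process started for this test
    kill 2767775    # the harness then waits for this pid to exit
    # drop the test address from the initiator-side interface
    ip -4 addr flush cvl_0_1

The set +e before the modprobe loop (and set -e after it) lets the harness retry module removal up to 20 times while connections drain, rather than aborting on a transient "module in use" error.
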
00:22:09.734 20:47:43 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:09.734 20:47:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:09.734 20:47:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:09.734 20:47:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:09.734 ************************************ 00:22:09.734 START TEST nvmf_perf 00:22:09.734 ************************************ 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:09.734 * Looking for test storage... 00:22:09.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # 
nvmftestinit 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:22:09.734 20:47:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:16.300 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:16.300 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:16.300 Found net devices under 0000:86:00.0: cvl_0_0 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:16.300 20:47:49 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:16.300 Found net devices under 0000:86:00.1: cvl_0_1 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:16.300 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:16.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:16.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:22:16.300 00:22:16.300 --- 10.0.0.2 ping statistics --- 00:22:16.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.300 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:22:16.301 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:16.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:16.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:22:16.301 00:22:16.301 --- 10.0.0.1 ping statistics --- 00:22:16.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.301 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:22:16.301 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:16.301 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:22:16.301 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:16.301 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:16.301 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:16.301 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:16.301 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:16.301 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:16.301 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:16.301 20:47:49 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:16.301 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:16.301 20:47:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:16.301 20:47:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:16.301 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2771537 00:22:16.301 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2771537 00:22:16.301 20:47:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:16.301 20:47:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 2771537 ']' 00:22:16.301 20:47:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.301 20:47:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:16.301 20:47:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.301 20:47:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:16.301 20:47:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:16.301 [2024-07-15 20:47:49.845747] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
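
The nvmf_tcp_init sequence above isolates one port of the E810 NIC in a network namespace so that target and initiator traffic crosses real hardware rather than loopback. Condensed, and using the interface names and addresses from this run, the topology setup is roughly:

    ip netns add cvl_0_0_ns_spdk                # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk   # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                          # verify the path before starting the target

nvmf_tgt is then launched inside the namespace via ip netns exec cvl_0_0_ns_spdk, which is why the listener at 10.0.0.2:4420 ends up served by the namespaced port while the initiator connects from 10.0.0.1.
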
00:22:16.301 [2024-07-15 20:47:49.845793] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:16.301 EAL: No free 2048 kB hugepages reported on node 1 00:22:16.301 [2024-07-15 20:47:49.903442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:16.301 [2024-07-15 20:47:49.979561] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:16.301 [2024-07-15 20:47:49.979602] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:16.301 [2024-07-15 20:47:49.979610] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:16.301 [2024-07-15 20:47:49.979616] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:16.301 [2024-07-15 20:47:49.979621] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:16.301 [2024-07-15 20:47:49.979671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:16.301 [2024-07-15 20:47:49.979770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:16.301 [2024-07-15 20:47:49.979846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:16.301 [2024-07-15 20:47:49.979848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.301 20:47:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:16.301 20:47:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:22:16.301 20:47:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:16.301 20:47:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:16.301 20:47:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:16.301 20:47:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:16.301 20:47:50 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:16.301 20:47:50 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:19.586 20:47:53 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:19.586 20:47:53 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:19.586 20:47:53 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:22:19.586 20:47:53 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:19.866 20:47:54 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:19.866 20:47:54 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:22:19.866 20:47:54 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:19.866 20:47:54 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:19.866 20:47:54 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:19.866 [2024-07-15 20:47:54.243793] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP 
Transport Init *** 00:22:19.866 20:47:54 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:20.124 20:47:54 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:20.124 20:47:54 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:20.382 20:47:54 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:20.382 20:47:54 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:20.382 20:47:54 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:20.640 [2024-07-15 20:47:54.988037] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:20.640 20:47:55 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:20.897 20:47:55 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:22:20.897 20:47:55 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:20.897 20:47:55 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:20.897 20:47:55 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:22.269 Initializing NVMe Controllers 00:22:22.269 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:22:22.269 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:22:22.269 Initialization complete. Launching workers. 00:22:22.269 ======================================================== 00:22:22.269 Latency(us) 00:22:22.269 Device Information : IOPS MiB/s Average min max 00:22:22.269 PCIE (0000:5e:00.0) NSID 1 from core 0: 97046.73 379.09 329.33 14.96 6191.87 00:22:22.269 ======================================================== 00:22:22.269 Total : 97046.73 379.09 329.33 14.96 6191.87 00:22:22.269 00:22:22.269 20:47:56 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:22.269 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.202 Initializing NVMe Controllers 00:22:23.202 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:23.202 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:23.202 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:23.202 Initialization complete. Launching workers. 
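
Stripped of the harness plumbing, the target configuration that perf.sh performs above is a short RPC sequence. A sketch using the values visible in the trace (rpc.py is shorthand for the full scripts/rpc.py path; the Nvme0n1 bdev comes from gen_nvme.sh / load_subsystem_config, and its traddr 0000:5e:00.0 is specific to this machine):

    rpc.py bdev_malloc_create 64 512    # 64 MB malloc bdev, 512-byte blocks -> Malloc0
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

This yields the two namespaces (NSID 1 = Malloc0, NSID 2 = Nvme0n1) that appear in the association messages of the fabrics perf runs.
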
00:22:23.202 ======================================================== 00:22:23.202 Latency(us) 00:22:23.202 Device Information : IOPS MiB/s Average min max 00:22:23.202 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 77.96 0.30 13251.48 111.26 45783.80 00:22:23.202 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 46.98 0.18 21627.24 6982.68 51874.10 00:22:23.202 ======================================================== 00:22:23.202 Total : 124.93 0.49 16400.77 111.26 51874.10 00:22:23.202 00:22:23.202 20:47:57 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:23.460 EAL: No free 2048 kB hugepages reported on node 1 00:22:24.396 Initializing NVMe Controllers 00:22:24.396 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:24.396 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:24.396 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:24.396 Initialization complete. Launching workers. 00:22:24.396 ======================================================== 00:22:24.396 Latency(us) 00:22:24.396 Device Information : IOPS MiB/s Average min max 00:22:24.396 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10755.99 42.02 2988.71 434.72 6369.04 00:22:24.396 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3874.00 15.13 8298.60 5685.53 15988.84 00:22:24.396 ======================================================== 00:22:24.396 Total : 14629.99 57.15 4394.76 434.72 15988.84 00:22:24.396 00:22:24.655 20:47:58 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:24.655 20:47:58 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:24.655 20:47:58 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:24.655 EAL: No free 2048 kB hugepages reported on node 1 00:22:27.185 Initializing NVMe Controllers 00:22:27.185 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:27.185 Controller IO queue size 128, less than required. 00:22:27.185 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:27.185 Controller IO queue size 128, less than required. 00:22:27.185 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:27.185 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:27.185 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:27.185 Initialization complete. Launching workers. 
00:22:27.185 ======================================================== 00:22:27.185 Latency(us) 00:22:27.185 Device Information : IOPS MiB/s Average min max 00:22:27.185 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1149.72 287.43 115418.65 72864.44 200271.25 00:22:27.185 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 619.81 154.95 212231.17 47189.74 337764.55 00:22:27.185 ======================================================== 00:22:27.185 Total : 1769.53 442.38 149328.99 47189.74 337764.55 00:22:27.185 00:22:27.185 20:48:01 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:27.185 EAL: No free 2048 kB hugepages reported on node 1 00:22:27.185 No valid NVMe controllers or AIO or URING devices found 00:22:27.185 Initializing NVMe Controllers 00:22:27.185 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:27.185 Controller IO queue size 128, less than required. 00:22:27.185 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:27.185 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:27.185 Controller IO queue size 128, less than required. 00:22:27.185 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:27.185 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:22:27.185 WARNING: Some requested NVMe devices were skipped 00:22:27.185 20:48:01 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:27.185 EAL: No free 2048 kB hugepages reported on node 1 00:22:29.713 Initializing NVMe Controllers 00:22:29.713 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:29.713 Controller IO queue size 128, less than required. 00:22:29.713 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:29.713 Controller IO queue size 128, less than required. 00:22:29.713 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:29.713 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:29.713 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:29.713 Initialization complete. Launching workers. 
00:22:29.713 00:22:29.713 ==================== 00:22:29.713 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:29.713 TCP transport: 00:22:29.713 polls: 33952 00:22:29.713 idle_polls: 11429 00:22:29.713 sock_completions: 22523 00:22:29.713 nvme_completions: 4739 00:22:29.713 submitted_requests: 7142 00:22:29.713 queued_requests: 1 00:22:29.713 00:22:29.713 ==================== 00:22:29.713 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:29.713 TCP transport: 00:22:29.713 polls: 39665 00:22:29.713 idle_polls: 17367 00:22:29.713 sock_completions: 22298 00:22:29.713 nvme_completions: 4807 00:22:29.713 submitted_requests: 7210 00:22:29.713 queued_requests: 1 00:22:29.713 ======================================================== 00:22:29.713 Latency(us) 00:22:29.713 Device Information : IOPS MiB/s Average min max 00:22:29.714 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1184.46 296.12 110592.84 55097.60 177461.02 00:22:29.714 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1201.46 300.37 109334.13 50424.11 156282.00 00:22:29.714 ======================================================== 00:22:29.714 Total : 2385.93 596.48 109959.00 50424.11 177461.02 00:22:29.714 00:22:29.714 20:48:03 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:29.714 20:48:03 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:29.714 20:48:04 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:29.714 20:48:04 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:29.714 20:48:04 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:29.714 20:48:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:29.714 20:48:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:22:29.714 20:48:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:29.714 20:48:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:22:29.714 20:48:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:29.714 20:48:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:29.714 rmmod nvme_tcp 00:22:29.714 rmmod nvme_fabrics 00:22:29.971 rmmod nvme_keyring 00:22:29.971 20:48:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:29.971 20:48:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:22:29.971 20:48:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:22:29.971 20:48:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2771537 ']' 00:22:29.971 20:48:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2771537 00:22:29.971 20:48:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 2771537 ']' 00:22:29.971 20:48:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 2771537 00:22:29.971 20:48:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:22:29.971 20:48:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:29.971 20:48:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2771537 00:22:29.971 20:48:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:29.971 20:48:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:29.971 20:48:04 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2771537' 00:22:29.971 killing process with pid 2771537 00:22:29.971 20:48:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 2771537 00:22:29.971 20:48:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 2771537 00:22:31.341 20:48:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:31.341 20:48:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:31.341 20:48:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:31.341 20:48:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:31.341 20:48:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:31.341 20:48:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.341 20:48:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:31.341 20:48:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.876 20:48:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:33.876 00:22:33.876 real 0m23.855s 00:22:33.876 user 1m3.543s 00:22:33.876 sys 0m7.108s 00:22:33.876 20:48:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:33.876 20:48:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:33.876 ************************************ 00:22:33.876 END TEST nvmf_perf 00:22:33.876 ************************************ 00:22:33.876 20:48:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:33.876 20:48:07 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:33.876 20:48:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:33.876 20:48:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:33.876 20:48:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:33.876 ************************************ 00:22:33.876 START TEST nvmf_fio_host 00:22:33.876 ************************************ 00:22:33.876 20:48:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:33.876 * Looking for test storage... 
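
Before the fio host test gets underway, note the shape of the perf sweep that just completed: every run points spdk_nvme_perf at the same listener and varies queue depth, I/O size, and transport options. A representative invocation, using values from the runs above:

    spdk_nvme_perf \
        -q 32 -o 4096 \     # queue depth and I/O size in bytes
        -w randrw -M 50 \   # random mixed workload, 50 percent reads
        -t 1 \              # run time in seconds
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

The -HI variant enables TCP header and data digests, --transport-stat produced the poll and completion counters above, and the -o 36964 run shows the tool skipping namespaces whose 512-byte sector size does not evenly divide the requested I/O size.
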
00:22:33.876 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:22:33.876 20:48:08 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:39.184 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:39.185 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:39.185 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:39.185 Found net devices under 0000:86:00.0: cvl_0_0 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:39.185 Found net devices under 0000:86:00.1: cvl_0_1 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:39.185 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:39.185 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:22:39.185 00:22:39.185 --- 10.0.0.2 ping statistics --- 00:22:39.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.185 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:39.185 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:39.185 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:22:39.185 00:22:39.185 --- 10.0.0.1 ping statistics --- 00:22:39.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.185 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2778147 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2778147 00:22:39.185 20:48:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 2778147 ']' 00:22:39.186 20:48:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.186 20:48:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:39.186 20:48:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.186 20:48:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:39.186 20:48:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.186 [2024-07-15 20:48:13.650278] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:22:39.186 [2024-07-15 20:48:13.650326] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:39.442 EAL: No free 2048 kB hugepages reported on node 1 00:22:39.443 [2024-07-15 20:48:13.706699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:39.443 [2024-07-15 20:48:13.788712] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:39.443 [2024-07-15 20:48:13.788746] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:39.443 [2024-07-15 20:48:13.788753] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:39.443 [2024-07-15 20:48:13.788759] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:39.443 [2024-07-15 20:48:13.788763] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:39.443 [2024-07-15 20:48:13.788806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:39.443 [2024-07-15 20:48:13.788906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:39.443 [2024-07-15 20:48:13.788967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:39.443 [2024-07-15 20:48:13.788968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.007 20:48:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:40.007 20:48:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:22:40.007 20:48:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:40.263 [2024-07-15 20:48:14.623692] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:40.263 20:48:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:40.263 20:48:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:40.263 20:48:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.263 20:48:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:40.520 Malloc1 00:22:40.520 20:48:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:40.776 20:48:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:40.776 20:48:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:41.033 [2024-07-15 20:48:15.397739] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:41.033 20:48:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:41.290 20:48:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:41.290 20:48:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:41.290 20:48:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:22:41.290 20:48:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:41.290 20:48:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:41.290 20:48:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:41.290 20:48:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:41.290 20:48:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:41.290 20:48:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:41.290 20:48:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:41.290 20:48:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:41.290 20:48:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:41.290 20:48:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:41.290 20:48:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:41.290 20:48:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:41.290 20:48:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:41.290 20:48:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:41.290 20:48:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:41.290 20:48:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:41.290 20:48:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:41.290 20:48:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:41.290 20:48:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:41.290 20:48:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:41.547 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:41.547 fio-3.35 00:22:41.547 Starting 1 thread 00:22:41.547 EAL: No free 2048 kB hugepages reported on node 1 00:22:44.071 00:22:44.071 test: (groupid=0, jobs=1): err= 0: pid=2778741: Mon Jul 15 20:48:18 2024 00:22:44.071 read: IOPS=11.7k, BW=45.9MiB/s (48.1MB/s)(92.0MiB/2005msec) 00:22:44.071 slat (nsec): min=1609, max=245727, avg=1759.64, stdev=2259.42 00:22:44.071 clat (usec): min=4145, max=10429, avg=6045.02, stdev=439.78 00:22:44.071 lat (usec): min=4181, max=10430, avg=6046.78, stdev=439.71 00:22:44.071 clat percentiles (usec): 00:22:44.071 | 1.00th=[ 4948], 5.00th=[ 5342], 10.00th=[ 5473], 20.00th=[ 5735], 00:22:44.071 | 30.00th=[ 5866], 40.00th=[ 5932], 50.00th=[ 6063], 60.00th=[ 6128], 00:22:44.071 | 70.00th=[ 6259], 80.00th=[ 6390], 90.00th=[ 6587], 95.00th=[ 6718], 00:22:44.071 | 99.00th=[ 7046], 99.50th=[ 7111], 99.90th=[ 8848], 99.95th=[ 9634], 00:22:44.071 | 99.99th=[10290] 00:22:44.071 bw ( KiB/s): 
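Worth decoding before the results print: the fio_plugin helper above probes the SPDK ioengine for sanitizer dependencies (the ldd | grep libasan / libclang_rt.asan pipelines, both empty here), then launches an unmodified fio binary with the plugin injected via LD_PRELOAD and a --filename string that encodes the NVMe/TCP target rather than a block device. Reduced to a single command with the paths taken from this log (the contents of example_config.fio are not shown in this excerpt):

# The SPDK plugin, not the kernel, parses this filename and opens a TCP
# connection to the target at 10.0.0.2:4420, namespace 1.
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme \
  /usr/src/fio/fio \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
  '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096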
min=46000, max=47576, per=99.98%, avg=46976.00, stdev=694.76, samples=4 00:22:44.071 iops : min=11500, max=11894, avg=11744.00, stdev=173.69, samples=4 00:22:44.071 write: IOPS=11.7k, BW=45.6MiB/s (47.8MB/s)(91.4MiB/2005msec); 0 zone resets 00:22:44.071 slat (nsec): min=1656, max=238495, avg=1844.13, stdev=1726.37 00:22:44.071 clat (usec): min=2498, max=9389, avg=4842.12, stdev=375.22 00:22:44.071 lat (usec): min=2514, max=9391, avg=4843.97, stdev=375.19 00:22:44.071 clat percentiles (usec): 00:22:44.071 | 1.00th=[ 3949], 5.00th=[ 4293], 10.00th=[ 4424], 20.00th=[ 4555], 00:22:44.071 | 30.00th=[ 4686], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4948], 00:22:44.071 | 70.00th=[ 5014], 80.00th=[ 5145], 90.00th=[ 5276], 95.00th=[ 5407], 00:22:44.071 | 99.00th=[ 5669], 99.50th=[ 5800], 99.90th=[ 7570], 99.95th=[ 8848], 00:22:44.071 | 99.99th=[ 9372] 00:22:44.071 bw ( KiB/s): min=46336, max=47176, per=99.98%, avg=46690.00, stdev=372.57, samples=4 00:22:44.071 iops : min=11584, max=11794, avg=11672.50, stdev=93.14, samples=4 00:22:44.071 lat (msec) : 4=0.64%, 10=99.35%, 20=0.01% 00:22:44.071 cpu : usr=68.96%, sys=27.54%, ctx=81, majf=0, minf=6 00:22:44.071 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:44.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.071 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:44.071 issued rwts: total=23552,23408,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.071 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:44.071 00:22:44.071 Run status group 0 (all jobs): 00:22:44.071 READ: bw=45.9MiB/s (48.1MB/s), 45.9MiB/s-45.9MiB/s (48.1MB/s-48.1MB/s), io=92.0MiB (96.5MB), run=2005-2005msec 00:22:44.071 WRITE: bw=45.6MiB/s (47.8MB/s), 45.6MiB/s-45.6MiB/s (47.8MB/s-47.8MB/s), io=91.4MiB (95.9MB), run=2005-2005msec 00:22:44.071 20:48:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:44.072 20:48:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:44.072 20:48:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:44.072 20:48:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:44.072 20:48:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:44.072 20:48:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:44.072 20:48:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:44.072 20:48:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:44.072 20:48:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:44.072 20:48:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:44.072 20:48:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:44.072 20:48:18 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:44.072 20:48:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:44.072 20:48:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:44.072 20:48:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:44.072 20:48:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:44.072 20:48:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:44.072 20:48:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:44.072 20:48:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:44.072 20:48:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:44.072 20:48:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:44.072 20:48:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:44.072 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:44.072 fio-3.35 00:22:44.072 Starting 1 thread 00:22:44.330 EAL: No free 2048 kB hugepages reported on node 1 00:22:46.860 00:22:46.860 test: (groupid=0, jobs=1): err= 0: pid=2779218: Mon Jul 15 20:48:20 2024 00:22:46.860 read: IOPS=10.5k, BW=164MiB/s (172MB/s)(328MiB/2005msec) 00:22:46.860 slat (nsec): min=2618, max=84708, avg=2881.88, stdev=1334.08 00:22:46.860 clat (usec): min=2458, max=14216, avg=7262.98, stdev=1936.40 00:22:46.860 lat (usec): min=2461, max=14230, avg=7265.87, stdev=1936.56 00:22:46.860 clat percentiles (usec): 00:22:46.860 | 1.00th=[ 3556], 5.00th=[ 4359], 10.00th=[ 4883], 20.00th=[ 5604], 00:22:46.860 | 30.00th=[ 6128], 40.00th=[ 6652], 50.00th=[ 7111], 60.00th=[ 7635], 00:22:46.860 | 70.00th=[ 8160], 80.00th=[ 8717], 90.00th=[ 9765], 95.00th=[10945], 00:22:46.860 | 99.00th=[12518], 99.50th=[13042], 99.90th=[13566], 99.95th=[13698], 00:22:46.860 | 99.99th=[14222] 00:22:46.860 bw ( KiB/s): min=77184, max=92608, per=50.76%, avg=85064.00, stdev=6625.28, samples=4 00:22:46.860 iops : min= 4824, max= 5788, avg=5316.50, stdev=414.08, samples=4 00:22:46.860 write: IOPS=5984, BW=93.5MiB/s (98.0MB/s)(174MiB/1859msec); 0 zone resets 00:22:46.860 slat (usec): min=30, max=379, avg=32.24, stdev= 7.69 00:22:46.860 clat (usec): min=2661, max=15141, avg=8604.06, stdev=1588.15 00:22:46.860 lat (usec): min=2692, max=15258, avg=8636.30, stdev=1590.13 00:22:46.860 clat percentiles (usec): 00:22:46.860 | 1.00th=[ 5735], 5.00th=[ 6325], 10.00th=[ 6783], 20.00th=[ 7242], 00:22:46.860 | 30.00th=[ 7701], 40.00th=[ 8029], 50.00th=[ 8455], 60.00th=[ 8717], 00:22:46.860 | 70.00th=[ 9241], 80.00th=[ 9896], 90.00th=[10814], 95.00th=[11600], 00:22:46.860 | 99.00th=[12911], 99.50th=[13698], 99.90th=[14877], 99.95th=[15008], 00:22:46.860 | 99.99th=[15139] 00:22:46.860 bw ( KiB/s): min=81408, max=95744, per=92.59%, avg=88656.00, stdev=6233.16, samples=4 00:22:46.860 iops : min= 5088, max= 5984, avg=5541.00, stdev=389.57, samples=4 00:22:46.860 lat (msec) : 4=1.87%, 10=85.88%, 20=12.25% 00:22:46.860 cpu : usr=83.58%, sys=14.92%, ctx=34, majf=0, minf=3 
00:22:46.860 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:22:46.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:46.860 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:46.860 issued rwts: total=21000,11125,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:46.860 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:46.860 00:22:46.860 Run status group 0 (all jobs): 00:22:46.860 READ: bw=164MiB/s (172MB/s), 164MiB/s-164MiB/s (172MB/s-172MB/s), io=328MiB (344MB), run=2005-2005msec 00:22:46.860 WRITE: bw=93.5MiB/s (98.0MB/s), 93.5MiB/s-93.5MiB/s (98.0MB/s-98.0MB/s), io=174MiB (182MB), run=1859-1859msec 00:22:46.860 20:48:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:46.860 20:48:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:46.860 20:48:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:46.860 20:48:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:46.860 20:48:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:46.860 20:48:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:46.860 20:48:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:22:46.860 20:48:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:46.860 20:48:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:22:46.860 20:48:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:46.860 20:48:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:46.860 rmmod nvme_tcp 00:22:46.860 rmmod nvme_fabrics 00:22:46.860 rmmod nvme_keyring 00:22:46.860 20:48:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:46.860 20:48:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:22:46.860 20:48:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:22:46.860 20:48:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2778147 ']' 00:22:46.860 20:48:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2778147 00:22:46.860 20:48:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 2778147 ']' 00:22:46.860 20:48:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 2778147 00:22:46.860 20:48:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:22:46.860 20:48:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:46.860 20:48:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2778147 00:22:46.860 20:48:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:46.860 20:48:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:46.860 20:48:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2778147' 00:22:46.860 killing process with pid 2778147 00:22:46.860 20:48:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 2778147 00:22:46.860 20:48:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 2778147 00:22:47.119 20:48:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:47.119 20:48:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp 
== \t\c\p ]] 00:22:47.119 20:48:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:47.119 20:48:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:47.119 20:48:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:47.119 20:48:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.119 20:48:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:47.119 20:48:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.018 20:48:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:49.018 00:22:49.018 real 0m15.515s 00:22:49.018 user 0m46.592s 00:22:49.018 sys 0m6.247s 00:22:49.018 20:48:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:49.018 20:48:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.018 ************************************ 00:22:49.018 END TEST nvmf_fio_host 00:22:49.018 ************************************ 00:22:49.018 20:48:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:49.018 20:48:23 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:49.018 20:48:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:49.018 20:48:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:49.018 20:48:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:49.276 ************************************ 00:22:49.276 START TEST nvmf_failover 00:22:49.276 ************************************ 00:22:49.276 20:48:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:49.276 * Looking for test storage... 
00:22:49.276 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:49.276 20:48:23 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:49.276 20:48:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:49.276 20:48:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:49.276 20:48:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:49.276 20:48:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:49.276 20:48:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:49.276 20:48:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:49.276 20:48:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:49.276 20:48:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:49.276 20:48:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:49.276 20:48:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:49.276 20:48:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:49.276 20:48:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:49.276 20:48:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:49.276 20:48:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:49.276 20:48:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:49.276 20:48:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:49.276 20:48:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:49.276 20:48:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:49.276 20:48:23 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:49.276 20:48:23 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:49.276 20:48:23 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:49.277 20:48:23 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.277 20:48:23 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.277 20:48:23 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.277 20:48:23 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:49.277 20:48:23 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.277 20:48:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:22:49.277 20:48:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:49.277 20:48:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:49.277 20:48:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:49.277 20:48:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:49.277 20:48:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:49.277 20:48:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:49.277 20:48:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:49.277 20:48:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:49.277 20:48:23 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:49.277 20:48:23 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:49.277 20:48:23 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:49.277 20:48:23 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:49.277 20:48:23 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:49.277 20:48:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:49.277 20:48:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:49.277 20:48:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:49.277 20:48:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:22:49.277 20:48:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:49.277 20:48:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.277 20:48:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:49.277 20:48:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.277 20:48:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:49.277 20:48:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:49.277 20:48:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:22:49.277 20:48:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:54.540 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:54.540 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:54.540 Found net devices under 0000:86:00.0: cvl_0_0 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:54.540 Found net devices under 0000:86:00.1: cvl_0_1 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:54.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:54.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:22:54.540 00:22:54.540 --- 10.0.0.2 ping statistics --- 00:22:54.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.540 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:54.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:54.540 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:22:54.540 00:22:54.540 --- 10.0.0.1 ping statistics --- 00:22:54.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.540 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:54.540 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:54.541 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:54.541 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:54.541 20:48:28 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:54.541 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:54.541 20:48:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:54.541 20:48:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:54.541 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2783059 00:22:54.541 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2783059 00:22:54.541 20:48:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:54.541 20:48:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2783059 ']' 00:22:54.541 20:48:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.541 20:48:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:54.541 20:48:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.541 20:48:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:54.541 20:48:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:54.541 [2024-07-15 20:48:28.967943] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
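As in the fio test earlier, nvmfappstart runs the target inside the network namespace: the launch is prefixed with $NVMF_TARGET_NS_CMD (ip netns exec cvl_0_0_ns_spdk), so nvmf_tgt binds 10.0.0.2 in the namespace while the initiator side stays on 10.0.0.1 in the default namespace. Stripped of the wrappers, and with the checkout path shortened into a variable, the launch reduces to roughly this (a sketch of what the helpers expand to, not their literal code):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk      # checkout path from the log
ip netns exec cvl_0_0_ns_spdk \
  "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &        # target lives in the namespace
nvmfpid=$!
# RPC still works from the default namespace: /var/tmp/spdk.sock is a
# path-based UNIX socket, which is not scoped to a network namespace.
"$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192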
00:22:54.541 [2024-07-15 20:48:28.967984] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.541 EAL: No free 2048 kB hugepages reported on node 1 00:22:54.800 [2024-07-15 20:48:29.024784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:54.800 [2024-07-15 20:48:29.106888] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:54.800 [2024-07-15 20:48:29.106920] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:54.800 [2024-07-15 20:48:29.106927] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:54.800 [2024-07-15 20:48:29.106933] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:54.800 [2024-07-15 20:48:29.106938] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:54.800 [2024-07-15 20:48:29.107033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:54.800 [2024-07-15 20:48:29.107131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:54.800 [2024-07-15 20:48:29.107132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:55.366 20:48:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:55.366 20:48:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:22:55.366 20:48:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:55.366 20:48:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:55.366 20:48:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:55.366 20:48:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.366 20:48:29 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:55.624 [2024-07-15 20:48:29.979194] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.624 20:48:30 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:55.882 Malloc0 00:22:55.882 20:48:30 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:56.140 20:48:30 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:56.140 20:48:30 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:56.398 [2024-07-15 20:48:30.738570] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:56.398 20:48:30 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:56.656 [2024-07-15 
20:48:30.919104] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:56.656 20:48:30 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:56.656 [2024-07-15 20:48:31.091626] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:56.656 20:48:31 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:56.656 20:48:31 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2783325 00:22:56.656 20:48:31 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:56.656 20:48:31 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2783325 /var/tmp/bdevperf.sock 00:22:56.656 20:48:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2783325 ']' 00:22:56.656 20:48:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:56.656 20:48:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:56.656 20:48:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:56.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:56.656 20:48:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:56.656 20:48:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:57.588 20:48:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:57.588 20:48:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:22:57.588 20:48:31 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:57.844 NVMe0n1 00:22:58.102 20:48:32 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:58.102 00:22:58.102 20:48:32 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2783569 00:22:58.102 20:48:32 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:58.102 20:48:32 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:59.519 20:48:33 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:59.519 20:48:33 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:02.798 20:48:36 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t 
tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:02.798 00:23:02.798 20:48:37 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:03.057 [2024-07-15 20:48:37.347060] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x107c460 is same with the state(5) to be set 00:23:03.057 [... the same tcp.c:1621 message for tqpair=0x107c460 repeats for several dozen further state transitions ...] 20:48:37 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:06.339 20:48:40 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:06.339 [2024-07-15 20:48:40.548387] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:06.339 20:48:40 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:07.270 20:48:41 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:07.270 [2024-07-15 20:48:41.742594] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235ac0 is same with the state(5) to be set 00:23:07.270 [... the same tcp.c:1621 message for tqpair=0x1235ac0 repeats ...]
of tqpair=0x1235ac0 is same with the state(5) to be set 00:23:07.270 [2024-07-15 20:48:41.742714] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235ac0 is same with the state(5) to be set 00:23:07.270 [2024-07-15 20:48:41.742719] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235ac0 is same with the state(5) to be set 00:23:07.270 [2024-07-15 20:48:41.742727] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235ac0 is same with the state(5) to be set 00:23:07.270 [2024-07-15 20:48:41.742733] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235ac0 is same with the state(5) to be set 00:23:07.270 [2024-07-15 20:48:41.742739] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235ac0 is same with the state(5) to be set 00:23:07.270 [2024-07-15 20:48:41.742746] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235ac0 is same with the state(5) to be set 00:23:07.270 [2024-07-15 20:48:41.742751] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235ac0 is same with the state(5) to be set 00:23:07.270 [2024-07-15 20:48:41.742757] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235ac0 is same with the state(5) to be set 00:23:07.270 [2024-07-15 20:48:41.742763] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235ac0 is same with the state(5) to be set 00:23:07.270 [2024-07-15 20:48:41.742769] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235ac0 is same with the state(5) to be set 00:23:07.270 [2024-07-15 20:48:41.742774] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235ac0 is same with the state(5) to be set 00:23:07.270 [2024-07-15 20:48:41.742780] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235ac0 is same with the state(5) to be set 00:23:07.270 [2024-07-15 20:48:41.742786] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235ac0 is same with the state(5) to be set 00:23:07.270 [2024-07-15 20:48:41.742792] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235ac0 is same with the state(5) to be set 00:23:07.270 [2024-07-15 20:48:41.742799] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235ac0 is same with the state(5) to be set 00:23:07.270 [2024-07-15 20:48:41.742805] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235ac0 is same with the state(5) to be set 00:23:07.270 [2024-07-15 20:48:41.742811] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235ac0 is same with the state(5) to be set 00:23:07.270 [2024-07-15 20:48:41.742817] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235ac0 is same with the state(5) to be set 00:23:07.270 [2024-07-15 20:48:41.742823] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235ac0 is same with the state(5) to be set 00:23:07.270 [2024-07-15 20:48:41.742829] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235ac0 is same with the state(5) to be set 00:23:07.270 [2024-07-15 20:48:41.742835] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235ac0 is same with the state(5) to be set 00:23:07.270 [2024-07-15 20:48:41.742843] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235ac0 is same with the state(5) to be set 00:23:07.270 [2024-07-15 20:48:41.742850] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235ac0 is same with the state(5) to be set 00:23:07.270 [2024-07-15 20:48:41.742858] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235ac0 is same with the state(5) to be set 00:23:07.528 20:48:41 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 2783569 00:23:14.088 0 00:23:14.088 20:48:47 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 2783325 00:23:14.088 20:48:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2783325 ']' 00:23:14.088 20:48:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2783325 00:23:14.088 20:48:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:23:14.088 20:48:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:14.088 20:48:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2783325 00:23:14.088 20:48:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:14.088 20:48:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:14.088 20:48:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2783325' 00:23:14.088 killing process with pid 2783325 00:23:14.088 20:48:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2783325 00:23:14.088 20:48:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2783325 00:23:14.088 20:48:47 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:14.088 [2024-07-15 20:48:31.152641] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:23:14.088 [2024-07-15 20:48:31.152694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2783325 ] 00:23:14.088 EAL: No free 2048 kB hugepages reported on node 1 00:23:14.088 [2024-07-15 20:48:31.208521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.088 [2024-07-15 20:48:31.283561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.088 Running I/O for 15 seconds... 
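The try.txt preamble above comes from a bdevperf host process whose exact command line is not captured in this excerpt. A minimal sketch of a comparable invocation, inferred from the EAL parameters in the log (-c 0x1, single core) and the 15-second run; the binary path, RPC socket, queue depth, and I/O size below are illustrative assumptions:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # start bdevperf idle (-z waits for RPC-driven configuration), then run a
  # 15-second verify workload; its console output is what later shows up as try.txt
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 15 &> test/nvmf/host/try.txt &

The bdev_nvme_attach_controller call whose tail opens this excerpt would then be issued against that RPC socket before I/O starts.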
00:23:14.088 [2024-07-15 20:48:33.751242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:14.088 [2024-07-15 20:48:33.751286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command/print_completion pair repeats for every outstanding I/O on qid:1 -- WRITEs lba:96168 through lba:96280 and READs lba:95264 through lba:96144 -- each completed ABORTED - SQ DELETION (00/08), 20:48:33.751302 through 20:48:33.753214 ...]
00:23:14.090 [2024-07-15 20:48:33.753221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc45300 is same with the state(5) to be set
00:23:14.090 [2024-07-15 20:48:33.753232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:14.090 [2024-07-15 20:48:33.753238] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:14.090 [2024-07-15 20:48:33.753244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96152 len:8 PRP1 0x0 PRP2 0x0
00:23:14.090 [2024-07-15 20:48:33.753251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:14.090 [2024-07-15 20:48:33.753293] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc45300 was disconnected and freed. reset controller.
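The collapsed abort dump above can be re-derived from try.txt itself. A small shell sketch, assuming the try.txt path printed by failover.sh@63 earlier in the log:

  f=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
  # total completions aborted by SQ deletion during the failover
  grep -c 'ABORTED - SQ DELETION' "$f"
  # smallest and largest LBA among the aborted commands
  grep -o 'lba:[0-9]*' "$f" | cut -d: -f2 | sort -n | sed -n '1p;$p'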
00:23:14.090 [2024-07-15 20:48:33.753302] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:14.090 [2024-07-15 20:48:33.753323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.090 [2024-07-15 20:48:33.753330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.090 [2024-07-15 20:48:33.753339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.090 [2024-07-15 20:48:33.753345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.090 [2024-07-15 20:48:33.753352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.090 [2024-07-15 20:48:33.753359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.090 [2024-07-15 20:48:33.753367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.090 [2024-07-15 20:48:33.753374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.090 [2024-07-15 20:48:33.753380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:14.090 [2024-07-15 20:48:33.756243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:14.090 [2024-07-15 20:48:33.756271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc27540 (9): Bad file descriptor 00:23:14.090 [2024-07-15 20:48:33.787124] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:14.090 [2024-07-15 20:48:37.348295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:14.090 [2024-07-15 20:48:37.348329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... remaining aborted-I/O dump elided: READ sqid:1 lba:28384-28712 and WRITE sqid:1 lba:28720-29208 (len:8, SGL) each printed and completed ABORTED - SQ DELETION (00/08), then queued WRITE commands lba:29216-29392 (PRP1 0x0 PRP2 0x0) aborted via nvme_qpair_abort_queued_reqs and completed manually with the same status ...]
00:23:14.094 [2024-07-15 20:48:37.361330] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xdf2170 was disconnected and freed. reset controller.
00:23:14.094 [2024-07-15 20:48:37.361339] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
[... 4 ASYNC EVENT REQUEST (0c) admin commands elided (qid:0 cid:3-0), each completed ABORTED - SQ DELETION (00/08) ...]
00:23:14.094 [2024-07-15 20:48:37.361417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:14.094 [2024-07-15 20:48:37.361445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc27540 (9): Bad file descriptor
00:23:14.094 [2024-07-15 20:48:37.364264] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:14.094 [2024-07-15 20:48:37.391466] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
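The records above form a cycle that recurs once per failover event in this run: every in-flight command on the queue pair is printed and completed ABORTED - SQ DELETION, the qpair is freed, bdev_nvme fails over to the next target port, and the controller reset completes. As a minimal sketch for condensing a capture like this one (assumptions, not part of SPDK or this job: the console output is saved to a local file named console.log, and the regexes match only the record shapes visible in this excerpt), the failover transitions and reset outcomes can be pulled out with a few lines of Python:

# failover_summary.py - minimal sketch: condense a console capture like the
# one above into its failover transitions and reset outcomes.
import re

# Record shapes copied from the excerpt above, not from any SPDK-defined format.
FAILOVER_RE = re.compile(
    r"bdev_nvme_failover_trid: \*NOTICE\*: Start failover from (\S+) to (\S+)")
RESET_OK_RE = re.compile(
    r"_bdev_nvme_reset_ctrlr_complete: \*NOTICE\*: Resetting controller successful\.")
ABORT_RE = re.compile(r"ABORTED - SQ DELETION")

def summarize(text: str) -> str:
    """Summarize failovers, completed resets, and aborted completions in a log."""
    lines = [f"failover: {old} -> {new}" for old, new in FAILOVER_RE.findall(text)]
    lines.append(f"successful resets: {len(RESET_OK_RE.findall(text))}")
    lines.append(f"ABORTED - SQ DELETION completions: {len(ABORT_RE.findall(text))}")
    return "\n".join(lines)

if __name__ == "__main__":
    with open("console.log") as f:  # hypothetical path to this console capture
        print(summarize(f.read()))

Run against this capture, it should report the 10.0.0.2:4420 -> 10.0.0.2:4421 and 10.0.0.2:4421 -> 10.0.0.2:4422 transitions recorded above, one successful reset per transition.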
00:23:14.094 [2024-07-15 20:48:41.743625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:27528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:14.094 [2024-07-15 20:48:41.743666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:14.094 [... 20:48:41.743681-744991: repeated print_command/print_completion pairs elided; every remaining outstanding READ (lba 27536-27776) and WRITE (lba 27784-28280) on sqid:1 completed as ABORTED - SQ DELETION (00/08) ...]
00:23:14.096 [2024-07-15 20:48:41.745132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:14.096 [2024-07-15 20:48:41.745139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28288 len:8 PRP1 0x0 PRP2 0x0
00:23:14.096 [2024-07-15 20:48:41.745145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:14.096 [2024-07-15 20:48:41.745157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:14.096 [... 20:48:41.745163-756778: repeated abort/manual-completion sequences elided; queued WRITEs lba 28296-28544 (sqid:1 cid:0, PRP1 0x0 PRP2 0x0) each completed as ABORTED - SQ DELETION (00/08) ...]
00:23:14.097 [2024-07-15 20:48:41.756822] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc22b80 was disconnected and freed. reset controller.
00:23:14.097 [2024-07-15 20:48:41.756833] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:23:14.097 [... 20:48:41.756857-756908: four pending ASYNC EVENT REQUEST (0c) admin commands (qid:0 cid:0-3) each completed as ABORTED - SQ DELETION (00/08) ...]
00:23:14.097 [2024-07-15 20:48:41.756915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:14.097 [2024-07-15 20:48:41.756937] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc27540 (9): Bad file descriptor
00:23:14.097 [2024-07-15 20:48:41.759850] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:14.097 [2024-07-15 20:48:41.834261] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:23:14.097
00:23:14.097 Latency(us)
00:23:14.097 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:14.097 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:14.097 Verification LBA range: start 0x0 length 0x4000
00:23:14.097 NVMe0n1 : 15.01 10951.44 42.78 383.49 0.00 11270.30 829.89 21655.37
00:23:14.097 ===================================================================================================================
00:23:14.097 Total : 10951.44 42.78 383.49 0.00 11270.30 829.89 21655.37
00:23:14.097 Received shutdown signal, test time was about 15.000000 seconds
00:23:14.097
00:23:14.097 Latency(us)
00:23:14.097 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:14.097 ===================================================================================================================
00:23:14.097 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:14.097 20:48:47 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:23:14.097 20:48:47 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:23:14.097 20:48:47 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:23:14.097 20:48:47 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2786085
00:23:14.097 20:48:47 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2786085 /var/tmp/bdevperf.sock
00:23:14.097 20:48:47 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:23:14.097 20:48:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2786085 ']'
00:23:14.097 20:48:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:14.097 20:48:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:23:14.097 20:48:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
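[editor's note] The trace above finishes the 15-second multipath run and launches a second bdevperf instance (pid 2786085). Before that, host/failover.sh@65-@67 asserts that exactly three 'Resetting controller successful' notices were logged, one per failover hop (4420 -> 4421 -> 4422 -> 4420). A minimal bash sketch of that check, assuming the run's output was captured to the try.txt file that the trace later shows via cat (the exact redirection is not visible in this excerpt):

    # Count one line per completed failover in the captured bdevperf log.
    try_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
    count=$(grep -c 'Resetting controller successful' "$try_file")

    # Three failovers were driven, so exactly three resets are expected.
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, got $count" >&2
        exit 1
    fi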
00:23:14.097 20:48:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable
00:23:14.097 20:48:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:23:14.354 20:48:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:14.354 20:48:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0
00:23:14.354 20:48:48 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:23:14.611 [2024-07-15 20:48:48.987791] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:23:14.611 20:48:49 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:23:14.868 [2024-07-15 20:48:49.160273] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:23:14.868 20:48:49 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:15.125 NVMe0n1
00:23:15.125 20:48:49 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:15.382
00:23:15.639 20:48:49 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:15.897
00:23:15.897 20:48:50 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:15.897 20:48:50 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:23:16.154 20:48:50 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:16.154 20:48:50 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:23:19.426 20:48:53 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:19.426 20:48:53 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:23:19.426 20:48:53 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2787009
00:23:19.426 20:48:53 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:23:19.426 20:48:53 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 2787009
00:23:20.837 0
00:23:20.837 20:48:54 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-07-15 20:48:48.022674] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization...
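[editor's note] The RPC sequence traced above reduces to a short script: two extra TCP listeners are added on the target, then bdev_nvme_attach_controller is called once per portal with the same bdev name, which is how NVMe0 acquires the alternate paths that the failover notices move between. Note that only the first attach prints the new bdev name (NVMe0n1); the later calls return empty output because they add paths to the existing controller. A condensed sketch, with the workspace path factored into a variable:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    nqn=nqn.2016-06.io.spdk:cnode1

    # Target side: expose the subsystem on two additional ports.
    $rpc_py nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421
    $rpc_py nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4422

    # Initiator side (bdevperf's RPC socket): attach the same bdev name
    # once per portal; repeat attaches register 4421/4422 as extra paths.
    for port in 4420 4421 4422; do
        $rpc_py -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
                -a 10.0.0.2 -s $port -f ipv4 -n $nqn
    done

    # Confirm the controller exists, then remove the active 4420 path.
    $rpc_py -s $sock bdev_nvme_get_controllers | grep -q NVMe0
    $rpc_py -s $sock bdev_nvme_detach_controller NVMe0 -t tcp \
            -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn

Detaching the active 4420 path (host/failover.sh@84) is what triggers the "Start failover from 10.0.0.2:4420 to 10.0.0.2:4421" notice in the try.txt excerpt that follows.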
00:23:20.837 [2024-07-15 20:48:48.022722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2786085 ]
00:23:20.837 EAL: No free 2048 kB hugepages reported on node 1
00:23:20.837 [2024-07-15 20:48:48.075789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:20.837 [2024-07-15 20:48:48.145932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:23:20.837 [2024-07-15 20:48:50.555547] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:23:20.837 [2024-07-15 20:48:50.555596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:20.837 [2024-07-15 20:48:50.555607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:20.837 [2024-07-15 20:48:50.555616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:20.837 [2024-07-15 20:48:50.555623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:20.837 [2024-07-15 20:48:50.555631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:20.837 [2024-07-15 20:48:50.555638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:20.837 [2024-07-15 20:48:50.555646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:20.837 [2024-07-15 20:48:50.555653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:20.837 [2024-07-15 20:48:50.555660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:20.837 [2024-07-15 20:48:50.555688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:20.837 [2024-07-15 20:48:50.555702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1812540 (9): Bad file descriptor
00:23:20.837 [2024-07-15 20:48:50.606435] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:23:20.837 Running I/O for 1 seconds...
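The notices above show the failover mechanics end to end: dropping 10.0.0.2:4420 aborts the in-flight admin commands (ABORTED - SQ DELETION), bdev_nvme fails the trid over to 10.0.0.2:4421, and the controller reset completes. The health check the script runs afterwards boils down to (same abbreviations as the sketch above):

    # exits 0 only if the NVMe0 controller survived the path drop
    $rootdir/scripts/rpc.py -s $sock bdev_nvme_get_controllers | grep -q NVMe0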
00:23:20.837
00:23:20.837 Latency(us)
00:23:20.837 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:20.837 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:20.837 Verification LBA range: start 0x0 length 0x4000
00:23:20.837 NVMe0n1 : 1.01 10865.30 42.44 0.00 0.00 11733.11 2322.25 15158.76
00:23:20.837 ===================================================================================================================
00:23:20.837 Total : 10865.30 42.44 0.00 0.00 11733.11 2322.25 15158.76
00:23:20.837 20:48:54 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:20.837 20:48:54 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:23:20.837 20:48:55 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:20.837 20:48:55 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:20.837 20:48:55 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:23:21.119 20:48:55 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:21.377 20:48:55 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:23:24.654 20:48:58 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:24.654 20:48:58 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:23:24.654 20:48:58 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 2786085
00:23:24.654 20:48:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2786085 ']'
00:23:24.654 20:48:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2786085
00:23:24.654 20:48:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:23:24.654 20:48:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:24.654 20:48:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2786085
00:23:24.654 20:48:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:23:24.654 20:48:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:23:24.654 20:48:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2786085'
killing process with pid 2786085
00:23:24.654 20:48:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2786085
00:23:24.654 20:48:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2786085
00:23:24.654 20:48:59 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:23:24.654 20:48:59 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:24.911 20:48:59 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:23:24.911 20:48:59 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:24.911 20:48:59 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:23:24.911 20:48:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:24.911 20:48:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync
00:23:24.911 20:48:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:24.911 20:48:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e
00:23:24.911 20:48:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:24.911 20:48:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:23:24.911 rmmod nvme_tcp
00:23:24.911 rmmod nvme_fabrics
00:23:24.911 rmmod nvme_keyring
00:23:24.911 20:48:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:23:24.911 20:48:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e
00:23:24.911 20:48:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0
00:23:24.911 20:48:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2783059 ']'
00:23:24.911 20:48:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2783059
00:23:24.911 20:48:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2783059 ']'
00:23:24.911 20:48:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2783059
00:23:24.911 20:48:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:23:24.911 20:48:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:24.911 20:48:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2783059
00:23:24.911 20:48:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:23:24.911 20:48:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:23:24.911 20:48:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2783059'
killing process with pid 2783059
00:23:24.911 20:48:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2783059
00:23:24.911 20:48:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2783059
00:23:25.169 20:48:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:23:25.169 20:48:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:23:25.169 20:48:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:23:25.169 20:48:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:23:25.169 20:48:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns
00:23:25.169 20:48:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:25.169 20:48:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:23:25.169 20:48:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:27.702 20:49:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:23:27.702
00:23:27.702 real 0m38.074s
00:23:27.702 user 2m3.319s
00:23:27.702 sys 0m7.298s
00:23:27.702 20:49:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable
00:23:27.702 20:49:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
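The nvmftestfini teardown traced above amounts to roughly the following sequence (a sketch; treating _remove_spdk_ns as deleting the target namespace is an assumption consistent with the ip commands logged, not code quoted from nvmf/common.sh):

    $rootdir/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp          # unloads nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill $nvmfpid && wait $nvmfpid   # the nvmf target started at setup (pid 2783059 here)
    ip netns delete cvl_0_0_ns_spdk  # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1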
00:23:27.702 ************************************
00:23:27.702 END TEST nvmf_failover
00:23:27.702 ************************************
00:23:27.702 20:49:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:23:27.702 20:49:01 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:23:27.702 20:49:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:23:27.702 20:49:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:23:27.702 20:49:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:23:27.702 ************************************
00:23:27.702 START TEST nvmf_host_discovery
00:23:27.702 ************************************
00:23:27.702 20:49:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:23:27.702 * Looking for test storage...
00:23:27.702 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:23:27.702 20:49:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:23:27.702 20:49:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s
00:23:27.702 20:49:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:23:27.702 20:49:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:23:27.702 20:49:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:23:27.702 20:49:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:23:27.702 20:49:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:23:27.702 20:49:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:23:27.702 20:49:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:23:27.702 20:49:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:23:27.702 20:49:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:23:27.702 20:49:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:23:27.702 20:49:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:23:27.702 20:49:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:23:27.702 20:49:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:23:27.702 20:49:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:23:27.702 20:49:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:23:27.702 20:49:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:23:27.702 20:49:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:23:27.703 20:49:01 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:23:27.703 20:49:01 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:23:27.703 20:49:01 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:23:27.703 20:49:01 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:27.703 20:49:01 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:27.703 20:49:01 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:27.703 20:49:01 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH
00:23:27.703 20:49:01 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:27.703 20:49:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0
00:23:27.703 20:49:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:23:27.703 20:49:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:23:27.703 20:49:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:23:27.703 20:49:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:23:27.703 20:49:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:23:27.703 20:49:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:23:27.703 20:49:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:23:27.703 20:49:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0
00:23:27.703 20:49:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
00:23:27.703 20:49:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009
00:23:27.703 20:49:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:23:27.703 20:49:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode
00:23:27.703 20:49:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test
00:23:27.703 20:49:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock
00:23:27.703 20:49:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit
00:23:27.703 20:49:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:23:27.703 20:49:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:23:27.703 20:49:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs
00:23:27.703 20:49:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no
00:23:27.703 20:49:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns
00:23:27.703 20:49:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:27.703 20:49:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:23:27.703 20:49:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:27.703 20:49:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:23:27.703 20:49:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:23:27.703 20:49:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable
00:23:27.703 20:49:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=()
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=()
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=()
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=()
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=()
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=()
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=()
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
Found 0000:86:00.0 (0x8086 - 0x159b)
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
Found 0000:86:00.1 (0x8086 - 0x159b)
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:23:32.967 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]]
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
Found net devices under 0000:86:00.0: cvl_0_0
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]]
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
Found net devices under 0000:86:00.1: cvl_0_1
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:23:32.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:32.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms
00:23:32.968
00:23:32.968 --- 10.0.0.2 ping statistics ---
00:23:32.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:32.968 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:32.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:32.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms
00:23:32.968
00:23:32.968 --- 10.0.0.1 ping statistics ---
00:23:32.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:32.968 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2791228
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 2791228
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2791228 ']'
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable
00:23:32.968 20:49:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:32.968 [2024-07-15 20:49:06.749726] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization...
00:23:32.968 [2024-07-15 20:49:06.749768] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:32.968 EAL: No free 2048 kB hugepages reported on node 1
00:23:32.968 [2024-07-15 20:49:06.806873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:32.968 [2024-07-15 20:49:06.887665] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:32.968 [2024-07-15 20:49:06.887696] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:32.968 [2024-07-15 20:49:06.887703] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:32.968 [2024-07-15 20:49:06.887709] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:32.968 [2024-07-15 20:49:06.887715] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
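nvmfappstart, as traced above, launches the target inside the cvl_0_0_ns_spdk namespace and blocks in waitforlisten until the RPC socket answers. A minimal stand-in for that wait (a sketch: it uses the real rpc_get_methods RPC in place of autotest's internal retry helper):

    ip netns exec cvl_0_0_ns_spdk \
        $rootdir/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do   # max_retries=100, as in the log
        $rootdir/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done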
00:23:32.968 [2024-07-15 20:49:06.887741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:23:33.227 20:49:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:33.227 20:49:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0
00:23:33.227 20:49:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:23:33.227 20:49:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable
00:23:33.227 20:49:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:33.227 20:49:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:33.227 20:49:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:23:33.227 20:49:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:33.227 20:49:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:33.227 [2024-07-15 20:49:07.590450] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:33.227 20:49:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:33.227 20:49:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
00:23:33.227 20:49:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:33.227 20:49:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:33.227 [2024-07-15 20:49:07.598575] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:23:33.227 20:49:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:33.227 20:49:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512
00:23:33.227 20:49:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:33.227 20:49:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:33.227 null0
00:23:33.227 20:49:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:33.227 20:49:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512
00:23:33.227 20:49:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:33.227 20:49:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:33.227 null1
00:23:33.227 20:49:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:33.227 20:49:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine
00:23:33.227 20:49:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:33.227 20:49:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:33.227 20:49:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:33.227 20:49:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2791470
00:23:33.227 20:49:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2791470 /tmp/host.sock
00:23:33.227 20:49:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2791470 ']'
00:23:33.227 20:49:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock
00:23:33.227 20:49:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100
00:23:33.227 20:49:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:23:33.227 20:49:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock
00:23:33.227 20:49:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable
00:23:33.227 20:49:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:33.486 [2024-07-15 20:49:07.673921] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization...
00:23:33.486 [2024-07-15 20:49:07.673962] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2791470 ]
00:23:33.486 EAL: No free 2048 kB hugepages reported on node 1
00:23:33.486 [2024-07-15 20:49:07.727105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:33.486 [2024-07-15 20:49:07.800479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:23:34.052 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:34.052 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0
00:23:34.052 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:23:34.052 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
00:23:34.052 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:34.052 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:34.052 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:34.052 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
00:23:34.052 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:34.052 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:34.052 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:34.052 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0
00:23:34.052 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names
00:23:34.052 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:34.052 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:34.052 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:34.052 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:34.052 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
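The discovery flow exercised from here on: a second SPDK app (the "host", RPC socket /tmp/host.sock) is pointed at the target's discovery service on port 8009, then polled until the subsystems it auto-attaches show up. Condensed from the rpc_cmd calls above, with $rootdir again standing in for the workspace path:

    rpc="$rootdir/scripts/rpc.py -s /tmp/host.sock"
    $rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test
    $rpc bdev_nvme_get_controllers | jq -r '.[].name'   # "nvme0" once cnode0 is attached
    $rpc bdev_get_bdevs | jq -r '.[].name'              # "nvme0n1" after a namespace is added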
00:23:34.052 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:34.052 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:34.052 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]]
00:23:34.052 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]]
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]]
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]]
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]]
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:34.310 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:34.569 [2024-07-15 20:49:08.805770] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count ))
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:34.569 20:49:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:34.569 20:49:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]]
00:23:34.569 20:49:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1
00:23:35.136 [2024-07-15 20:49:09.529789] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:23:35.136 [2024-07-15 20:49:09.529809] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:23:35.136 [2024-07-15 20:49:09.529821] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:23:35.136 [2024-07-15 20:49:09.617090] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:23:35.394 [2024-07-15 20:49:09.679989] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:23:35.394 [2024-07-15 20:49:09.680009] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:23:35.653 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:23:35.653 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:23:35.653 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names
00:23:35.653 20:49:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:35.653 20:49:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:35.653 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:35.653 20:49:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:35.653 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:35.653 20:49:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:35.653 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:35.653 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:35.653 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:23:35.653 20:49:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:23:35.653 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:23:35.653 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:23:35.653 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:23:35.653 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:23:35.653 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list
00:23:35.653 20:49:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:35.653 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:35.653 20:49:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:35.653 20:49:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:35.653 20:49:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:35.653 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:35.653 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:23:35.653 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:23:35.653 20:49:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:23:35.653 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:23:35.653 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:23:35.653 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:23:35.653 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:23:35.653 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0
00:23:35.653 20:49:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:23:35.653 20:49:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:23:35.653 20:49:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:23:35.653 20:49:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:23:35.653 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:35.653 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:35.653 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:35.923 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]]
00:23:35.923 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:23:35.923 20:49:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1
00:23:35.923 20:49:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:23:35.923 20:49:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:23:35.923 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:23:35.923 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:23:35.923 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:23:35.923 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:23:35.923 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count
00:23:35.923 20:49:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:23:35.923 20:49:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:23:35.923 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:35.923 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:35.923 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:35.923 20:49:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:23:35.923 20:49:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1
00:23:35.923 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count ))
00:23:35.923 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:23:35.923 20:49:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:23:35.923 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:35.923 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:35.923 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:35.923 20:49:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:23:35.923 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:23:35.923 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:23:35.923 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:23:35.923 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:23:35.923 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list
00:23:35.923 20:49:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:35.923 20:49:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:35.923 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:35.923 20:49:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:35.923 20:49:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:35.923 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:36.181 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:36.181 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:23:36.181 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:23:36.181 20:49:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1
00:23:36.181 20:49:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:23:36.181 20:49:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:23:36.181 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:23:36.181 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:23:36.181 20:49:10 nvmf_tcp.nvmf_host_discovery
-- common/autotest_common.sh@914 -- # (( max-- )) 00:23:36.181 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:36.181 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:36.181 20:49:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:36.181 20:49:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:36.181 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.181 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.181 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.181 20:49:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:36.181 20:49:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:36.181 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:36.181 20:49:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:23:37.112 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:37.112 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:37.112 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:37.112 20:49:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:37.112 20:49:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:37.112 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.112 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.112 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.112 20:49:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:37.112 20:49:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:37.113 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:37.113 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:37.113 20:49:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:37.113 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.113 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.113 [2024-07-15 20:49:11.561333] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:37.113 [2024-07-15 20:49:11.561823] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:37.113 [2024-07-15 20:49:11.561844] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:37.113 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.113 20:49:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:37.113 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:37.113 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:37.113 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:37.113 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:37.113 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:37.113 20:49:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:37.113 20:49:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:37.113 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.113 20:49:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:37.113 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.113 20:49:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:37.113 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.370 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.370 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:37.370 20:49:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:37.370 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" 
]]' 00:23:37.370 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:37.370 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:37.370 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:37.370 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:37.370 20:49:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:37.370 20:49:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:37.370 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.370 20:49:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:37.370 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.370 20:49:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:37.370 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.370 [2024-07-15 20:49:11.649098] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:37.370 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:37.370 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:37.370 20:49:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:37.370 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:37.370 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:37.370 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:37.370 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:37.370 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:37.370 20:49:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:37.370 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.370 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.370 20:49:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:37.370 20:49:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:37.370 20:49:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:37.370 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.370 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:37.370 20:49:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:23:37.370 [2024-07-15 20:49:11.747777] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:37.370 [2024-07-15 20:49:11.747795] 
bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:37.370 [2024-07-15 20:49:11.747800] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:38.300 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:38.300 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:38.300 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:38.300 20:49:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:38.300 20:49:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:38.300 20:49:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:38.300 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.300 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.300 20:49:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:38.300 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.300 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:38.300 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:38.300 20:49:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:38.300 20:49:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:38.300 20:49:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:38.300 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:38.300 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:38.300 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:38.300 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:38.300 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:38.300 20:49:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:38.300 20:49:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:38.300 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.300 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.559 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.559 20:49:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:38.559 20:49:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:38.559 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:38.559 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:38.559 20:49:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:38.559 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.559 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.559 [2024-07-15 20:49:12.820865] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:38.559 [2024-07-15 20:49:12.820888] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:38.559 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.559 20:49:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:38.559 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:38.560 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:38.560 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:38.560 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:38.560 [2024-07-15 20:49:12.827038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:38.560 [2024-07-15 20:49:12.827056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.560 [2024-07-15 20:49:12.827064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:38.560 [2024-07-15 20:49:12.827072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.560 [2024-07-15 20:49:12.827079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:38.560 [2024-07-15 20:49:12.827086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.560 [2024-07-15 20:49:12.827092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:38.560 [2024-07-15 20:49:12.827099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.560 [2024-07-15 20:49:12.827106] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f2f10 is same with the state(5) to be set 00:23:38.560 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:38.560 20:49:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:38.560 20:49:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:38.560 20:49:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:38.560 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.560 20:49:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:38.560 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.560 [2024-07-15 20:49:12.837052] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f2f10 (9): Bad file descriptor 00:23:38.560 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.560 [2024-07-15 20:49:12.847089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:38.560 [2024-07-15 20:49:12.847322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:38.560 [2024-07-15 20:49:12.847337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2f10 with addr=10.0.0.2, port=4420 00:23:38.560 [2024-07-15 20:49:12.847345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f2f10 is same with the state(5) to be set 00:23:38.560 [2024-07-15 20:49:12.847357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f2f10 (9): Bad file descriptor 00:23:38.560 [2024-07-15 20:49:12.847368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:38.560 [2024-07-15 20:49:12.847376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:38.560 [2024-07-15 20:49:12.847383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:38.560 [2024-07-15 20:49:12.847393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
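The burst of "Resetting controller failed" errors above is the expected consequence of the nvmf_subsystem_remove_listener call issued at host/discovery.sh@127: once the 4420 listener is torn down, every host-side reconnect to 10.0.0.2:4420 is refused (errno 111, ECONNREFUSED) until the discovery poller prunes the dead path. A minimal sketch of that step, assuming rpc.py against the default target socket (the log's rpc_cmd is a thin wrapper around it):

rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# host reconnects to port 4420 now fail with ECONNREFUSED; the discovery log page
# then reports the 4420 path "not found" and only the 4421 path "found again" (see below)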
00:23:38.560 [2024-07-15 20:49:12.857143] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:38.560 [2024-07-15 20:49:12.857413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:38.560 [2024-07-15 20:49:12.857428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2f10 with addr=10.0.0.2, port=4420 00:23:38.560 [2024-07-15 20:49:12.857436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f2f10 is same with the state(5) to be set 00:23:38.560 [2024-07-15 20:49:12.857447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f2f10 (9): Bad file descriptor 00:23:38.560 [2024-07-15 20:49:12.857457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:38.560 [2024-07-15 20:49:12.857463] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:38.560 [2024-07-15 20:49:12.857470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:38.560 [2024-07-15 20:49:12.857480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:38.560 [2024-07-15 20:49:12.867195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:38.560 [2024-07-15 20:49:12.867501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:38.560 [2024-07-15 20:49:12.867518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2f10 with addr=10.0.0.2, port=4420 00:23:38.560 [2024-07-15 20:49:12.867526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f2f10 is same with the state(5) to be set 00:23:38.560 [2024-07-15 20:49:12.867538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f2f10 (9): Bad file descriptor 00:23:38.560 [2024-07-15 20:49:12.867549] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:38.560 [2024-07-15 20:49:12.867556] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:38.560 [2024-07-15 20:49:12.867563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:38.560 [2024-07-15 20:49:12.867577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
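Nearly every check in this log runs through the waitforcondition helper whose xtrace appears at autotest_common.sh@912-918. Reconstructed from those traces, it is roughly the sketch below; the exhaustion branch (return 1 after ten one-second polls) is an assumption, since this run only ever exercises the success path:

waitforcondition() {
    local cond=$1    # a bash expression, e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
    local max=10
    while (( max-- )); do                # @914
        eval "$cond" && return 0         # @915/@916: succeed as soon as the condition holds
        sleep 1                          # @918
    done
    return 1                             # assumed failure path, not shown in this log
}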
00:23:38.560 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.560 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:38.560 20:49:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:38.560 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:38.560 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:38.560 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:38.560 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:38.560 [2024-07-15 20:49:12.877254] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:38.560 [2024-07-15 20:49:12.877369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:38.560 [2024-07-15 20:49:12.877381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2f10 with addr=10.0.0.2, port=4420 00:23:38.560 [2024-07-15 20:49:12.877388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f2f10 is same with the state(5) to be set 00:23:38.560 [2024-07-15 20:49:12.877399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f2f10 (9): Bad file descriptor 00:23:38.560 [2024-07-15 20:49:12.877408] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:38.560 [2024-07-15 20:49:12.877415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:38.560 [2024-07-15 20:49:12.877421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:38.560 [2024-07-15 20:49:12.877430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
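The conditions themselves lean on two small jq helpers whose pipelines are visible at host/discovery.sh@55 and @59. Reconstructed from the xtrace, with the socket path as used in this run, they amount to:

get_subsystem_names() {  # discovery.sh@59
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {        # discovery.sh@55
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

The trailing xargs flattens the sorted names onto one space-separated line, which is why the comparisons above match against strings such as "nvme0n1 nvme0n2".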
00:23:38.560 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:38.560 20:49:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:38.560 20:49:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:38.560 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.560 20:49:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:38.560 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.560 20:49:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:38.560 [2024-07-15 20:49:12.887308] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:38.560 [2024-07-15 20:49:12.887576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:38.560 [2024-07-15 20:49:12.887590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2f10 with addr=10.0.0.2, port=4420 00:23:38.560 [2024-07-15 20:49:12.887598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f2f10 is same with the state(5) to be set 00:23:38.560 [2024-07-15 20:49:12.887609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f2f10 (9): Bad file descriptor 00:23:38.560 [2024-07-15 20:49:12.887627] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:38.560 [2024-07-15 20:49:12.887634] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:38.560 [2024-07-15 20:49:12.887641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:38.560 [2024-07-15 20:49:12.887650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:38.560 [2024-07-15 20:49:12.897362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:38.560 [2024-07-15 20:49:12.897671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:38.560 [2024-07-15 20:49:12.897687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2f10 with addr=10.0.0.2, port=4420 00:23:38.560 [2024-07-15 20:49:12.897696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f2f10 is same with the state(5) to be set 00:23:38.560 [2024-07-15 20:49:12.897706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f2f10 (9): Bad file descriptor 00:23:38.560 [2024-07-15 20:49:12.897723] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:38.560 [2024-07-15 20:49:12.897730] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:38.560 [2024-07-15 20:49:12.897737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:38.560 [2024-07-15 20:49:12.897746] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
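Two more helpers round out the pattern. get_subsystem_paths (discovery.sh@63) lists one trsvcid per .ctrlrs[] path of a controller, so it yields "4420 4421" while both listeners are up and collapses to "4421" once 4420 is removed; get_notification_count (discovery.sh@74-75) fetches notifications past the last seen id and advances that id. Reconstructed sketches, with the accumulation of notify_id inferred from its 1 -> 2 -> 2 -> 4 progression in this log:

get_subsystem_paths() {
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
        jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}

get_notification_count() {
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" |
        jq '. | length')
    notify_id=$((notify_id + notification_count))   # inferred from the values logged above
}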
00:23:38.560 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.560 [2024-07-15 20:49:12.906753] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:38.560 [2024-07-15 20:49:12.906768] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:38.560 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:38.560 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:38.560 20:49:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:38.560 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:38.560 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:38.560 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:38.560 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:38.560 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:38.560 20:49:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:38.560 20:49:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:38.560 20:49:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:38.561 20:49:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:38.561 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.561 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.561 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.561 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:23:38.561 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:38.561 20:49:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:38.561 20:49:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:38.561 20:49:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:38.561 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:38.561 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:38.561 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:38.561 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:38.561 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:38.561 20:49:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:23:38.561 20:49:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:38.561 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.561 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.561 20:49:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.561 20:49:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:38.561 20:49:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:38.561 20:49:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:38.561 20:49:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:38.561 20:49:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:38.561 20:49:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.561 20:49:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.561 20:49:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.561 20:49:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:38.561 20:49:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:38.561 20:49:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:38.561 20:49:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:38.561 20:49:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:38.561 20:49:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:38.561 20:49:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:38.561 20:49:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:38.819 
20:49:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.819 20:49:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.750 [2024-07-15 20:49:14.200042] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:39.751 [2024-07-15 20:49:14.200059] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:39.751 [2024-07-15 20:49:14.200070] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:40.008 [2024-07-15 20:49:14.288344] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:40.008 [2024-07-15 20:49:14.396917] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:40.008 [2024-07-15 20:49:14.396944] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:40.008 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.008 20:49:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:40.008 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:40.008 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:40.008 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:40.008 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:40.008 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:40.008 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:40.008 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:40.008 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.008 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.008 request: 00:23:40.008 { 00:23:40.008 "name": "nvme", 00:23:40.009 "trtype": "tcp", 00:23:40.009 "traddr": "10.0.0.2", 00:23:40.009 "adrfam": "ipv4", 00:23:40.009 "trsvcid": 
"8009", 00:23:40.009 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:40.009 "wait_for_attach": true, 00:23:40.009 "method": "bdev_nvme_start_discovery", 00:23:40.009 "req_id": 1 00:23:40.009 } 00:23:40.009 Got JSON-RPC error response 00:23:40.009 response: 00:23:40.009 { 00:23:40.009 "code": -17, 00:23:40.009 "message": "File exists" 00:23:40.009 } 00:23:40.009 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:40.009 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:40.009 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:40.009 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:40.009 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:40.009 20:49:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:40.009 20:49:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:40.009 20:49:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:40.009 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.009 20:49:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:40.009 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.009 20:49:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:40.009 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.009 20:49:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:40.009 20:49:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:40.009 20:49:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:40.009 20:49:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:40.009 20:49:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:40.009 20:49:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:40.009 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.009 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.009 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # 
type -t rpc_cmd 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.267 request: 00:23:40.267 { 00:23:40.267 "name": "nvme_second", 00:23:40.267 "trtype": "tcp", 00:23:40.267 "traddr": "10.0.0.2", 00:23:40.267 "adrfam": "ipv4", 00:23:40.267 "trsvcid": "8009", 00:23:40.267 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:40.267 "wait_for_attach": true, 00:23:40.267 "method": "bdev_nvme_start_discovery", 00:23:40.267 "req_id": 1 00:23:40.267 } 00:23:40.267 Got JSON-RPC error response 00:23:40.267 response: 00:23:40.267 { 00:23:40.267 "code": -17, 00:23:40.267 "message": "File exists" 00:23:40.267 } 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.267 20:49:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.238 [2024-07-15 20:49:15.636422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.238 [2024-07-15 20:49:15.636451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x62fa00 with addr=10.0.0.2, port=8010 00:23:41.238 [2024-07-15 20:49:15.636463] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:41.238 [2024-07-15 20:49:15.636473] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:41.238 [2024-07-15 20:49:15.636479] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:42.189 [2024-07-15 20:49:16.638873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.189 [2024-07-15 20:49:16.638897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x62fa00 with addr=10.0.0.2, port=8010 00:23:42.189 [2024-07-15 20:49:16.638908] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:42.189 [2024-07-15 20:49:16.638915] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:42.189 [2024-07-15 20:49:16.638920] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:43.564 [2024-07-15 20:49:17.641015] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:43.564 request: 00:23:43.564 { 00:23:43.564 "name": "nvme_second", 00:23:43.564 "trtype": "tcp", 00:23:43.564 "traddr": "10.0.0.2", 00:23:43.564 "adrfam": "ipv4", 00:23:43.564 "trsvcid": "8010", 00:23:43.564 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:43.564 "wait_for_attach": false, 00:23:43.564 "attach_timeout_ms": 3000, 00:23:43.564 "method": "bdev_nvme_start_discovery", 00:23:43.564 "req_id": 1 00:23:43.564 } 00:23:43.564 Got JSON-RPC error response 00:23:43.564 response: 00:23:43.564 { 00:23:43.564 "code": -110, 00:23:43.564 "message": "Connection timed out" 00:23:43.564 } 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@651 -- # es=1 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2791470 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:43.564 rmmod nvme_tcp 00:23:43.564 rmmod nvme_fabrics 00:23:43.564 rmmod nvme_keyring 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2791228 ']' 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2791228 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 2791228 ']' 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 2791228 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2791228 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with 
pid 2791228' 00:23:43.564 killing process with pid 2791228 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 2791228 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 2791228 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:43.564 20:49:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:46.093 00:23:46.093 real 0m18.370s 00:23:46.093 user 0m24.180s 00:23:46.093 sys 0m5.181s 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.093 ************************************ 00:23:46.093 END TEST nvmf_host_discovery 00:23:46.093 ************************************ 00:23:46.093 20:49:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:46.093 20:49:20 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:46.093 20:49:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:46.093 20:49:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:46.093 20:49:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:46.093 ************************************ 00:23:46.093 START TEST nvmf_host_multipath_status 00:23:46.093 ************************************ 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:46.093 * Looking for test storage... 
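
The hand-off above is the harness's standard pattern: nvmf_host_discovery tears itself down (killprocess verifies the PID's comm name with ps before killing it, nvmftestfini unloads nvme-tcp/nvme-fabrics with a retry loop), then nvmf.sh launches the next suite through run_test, which prints the START TEST banner seen here. A rough, hypothetical reconstruction of the wrapper behaviour the trace shows (argument-count guard, banners, exit-status check) -- not the real autotest_common.sh code:

#!/usr/bin/env bash
# Hypothetical sketch of the run_test pattern visible in the trace
# (banners plus exit-status handling); the real helper does more.
run_test() {
    local name=$1; shift
    (( $# >= 1 )) || return 1    # same intent as the '[ 3 -le 1 ]' argc guard above
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    local es=0
    "$@" || es=$?
    (( es > 128 )) && echo "$name died on signal $(( es - 128 ))"  # 128+N = killed by signal N
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $es
}

run_test nvmf_host_multipath_status ./multipath_status.sh --transport=tcp
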
00:23:46.093 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:46.093 20:49:20 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:23:46.093 20:49:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:51.349 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:51.349 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
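
Above, gather_supported_nvmf_pci_devs buckets the machine's NICs by PCI vendor:device ID -- Intel 0x1592/0x159b into e810, 0x37d2 into x722, the Mellanox ConnectX IDs into mlx -- and both 0000:86:00.x ports match the E810 entry (0x8086 - 0x159b), so pci_devs narrows to the e810 array. A self-contained approximation of that bucketing; the real common.sh consults a prebuilt pci_bus_cache map, whereas this sketch parses lspci instead (an assumption made so the example runs standalone):

#!/usr/bin/env bash
# Classify network ports the way the trace does, keyed on vendor:device.
declare -a e810=() x722=() mlx=()
while read -r slot class vendor device _; do
    class=${class//\"/} vendor=${vendor//\"/} device=${device//\"/}
    [[ $class == 02* ]] || continue                 # network controllers only
    case "$vendor:$device" in
        8086:1592|8086:159b) e810+=("$slot") ;;     # Intel E810, as matched above
        8086:37d2)           x722+=("$slot") ;;     # Intel X722
        15b3:*)              mlx+=("$slot")  ;;     # Mellanox ConnectX (real list is per-ID)
    esac
done < <(lspci -Dnmm)                               # -D domain, -n numeric IDs, -mm parseable
echo "e810: ${e810[*]:-none} | x722: ${x722[*]:-none} | mlx: ${mlx[*]:-none}"
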
00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:51.349 Found net devices under 0000:86:00.0: cvl_0_0 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:51.349 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:51.350 Found net devices under 0000:86:00.1: cvl_0_1 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:51.350 20:49:25 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:51.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:51.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:23:51.350 00:23:51.350 --- 10.0.0.2 ping statistics --- 00:23:51.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.350 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:51.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:51.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:23:51.350 00:23:51.350 --- 10.0.0.1 ping statistics --- 00:23:51.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.350 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2796568 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2796568 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2796568 ']' 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:51.350 20:49:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:51.350 [2024-07-15 20:49:25.371034] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
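
nvmf_tcp_init, traced above, splits the two E810 ports between network namespaces: cvl_0_0 becomes the target interface inside cvl_0_0_ns_spdk (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), an iptables rule opens TCP/4420, and the two pings confirm the link before nvmfappstart prefixes the target with ip netns exec. The same plumbing as a standalone script -- interface names, addresses and namespace are taken verbatim from the trace; run as root:

#!/usr/bin/env bash
set -e
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start clean, as the trace does
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                 # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                              # root ns -> target address
ip netns exec "$NS" ping -c 1 10.0.0.1          # namespace -> initiator address
# the target then starts inside the namespace, as in the nvmfappstart line above:
# ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
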
00:23:51.350 [2024-07-15 20:49:25.371076] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.350 EAL: No free 2048 kB hugepages reported on node 1 00:23:51.350 [2024-07-15 20:49:25.427469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:51.350 [2024-07-15 20:49:25.507111] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.350 [2024-07-15 20:49:25.507147] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:51.350 [2024-07-15 20:49:25.507154] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:51.350 [2024-07-15 20:49:25.507161] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:51.350 [2024-07-15 20:49:25.507166] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:51.350 [2024-07-15 20:49:25.507205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.350 [2024-07-15 20:49:25.507208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.912 20:49:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:51.912 20:49:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:23:51.912 20:49:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:51.912 20:49:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:51.912 20:49:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:51.912 20:49:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.912 20:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2796568 00:23:51.912 20:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:51.912 [2024-07-15 20:49:26.367596] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:51.912 20:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:52.169 Malloc0 00:23:52.169 20:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:52.427 20:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:52.684 20:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:52.684 [2024-07-15 20:49:27.099434] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:52.684 20:49:27 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:52.941 [2024-07-15 20:49:27.271872] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:52.941 20:49:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2797024 00:23:52.941 20:49:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:52.941 20:49:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:52.941 20:49:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2797024 /var/tmp/bdevperf.sock 00:23:52.941 20:49:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2797024 ']' 00:23:52.941 20:49:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:52.941 20:49:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:52.941 20:49:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:52.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:52.941 20:49:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:52.941 20:49:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:53.874 20:49:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:53.874 20:49:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:23:53.874 20:49:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:53.874 20:49:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:54.148 Nvme0n1 00:23:54.148 20:49:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:54.714 Nvme0n1 00:23:54.714 20:49:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:54.714 20:49:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:56.615 20:49:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:56.615 20:49:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:56.873 20:49:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:57.131 20:49:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:58.066 20:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:58.066 20:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:58.066 20:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.066 20:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:58.324 20:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.324 20:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:58.324 20:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.324 20:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:58.583 20:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:58.583 20:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:58.583 20:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.583 20:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:58.583 20:49:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.583 20:49:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:58.583 20:49:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:58.583 20:49:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.842 20:49:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.842 20:49:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:58.842 20:49:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.842 20:49:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:59.101 20:49:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.101 20:49:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:59.101 20:49:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.101 20:49:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:59.360 20:49:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.360 20:49:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:59.360 20:49:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:59.360 20:49:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:59.619 20:49:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:00.556 20:49:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:00.556 20:49:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:00.556 20:49:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.556 20:49:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:00.883 20:49:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:00.883 20:49:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:00.883 20:49:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.883 20:49:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:00.883 20:49:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.883 20:49:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:01.141 20:49:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.141 20:49:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:01.141 20:49:35 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.141 20:49:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:01.141 20:49:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.141 20:49:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:01.400 20:49:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.400 20:49:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:01.400 20:49:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.400 20:49:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:01.659 20:49:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.659 20:49:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:01.659 20:49:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.659 20:49:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:01.659 20:49:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.659 20:49:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:01.659 20:49:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:01.918 20:49:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:02.176 20:49:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:03.111 20:49:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:03.111 20:49:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:03.111 20:49:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.111 20:49:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:03.369 20:49:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.369 20:49:37 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:03.369 20:49:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.369 20:49:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:03.627 20:49:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:03.627 20:49:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:03.627 20:49:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.627 20:49:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:03.627 20:49:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.627 20:49:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:03.627 20:49:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.627 20:49:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:03.886 20:49:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.886 20:49:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:03.886 20:49:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.886 20:49:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:04.144 20:49:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.144 20:49:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:04.144 20:49:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.144 20:49:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:04.144 20:49:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.144 20:49:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:04.144 20:49:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:04.403 20:49:38 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:04.661 20:49:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:05.596 20:49:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:05.596 20:49:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:05.596 20:49:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.596 20:49:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:05.855 20:49:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:05.855 20:49:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:05.855 20:49:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.855 20:49:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:06.114 20:49:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:06.114 20:49:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:06.114 20:49:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.114 20:49:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:06.373 20:49:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.373 20:49:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:06.373 20:49:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.373 20:49:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:06.373 20:49:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.373 20:49:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:06.373 20:49:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.373 20:49:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:06.632 20:49:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:24:06.632 20:49:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:06.632 20:49:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.632 20:49:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:06.891 20:49:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:06.891 20:49:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:06.891 20:49:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:06.891 20:49:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:07.150 20:49:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:08.087 20:49:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:08.087 20:49:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:08.087 20:49:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.087 20:49:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:08.346 20:49:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:08.346 20:49:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:08.346 20:49:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.346 20:49:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:08.605 20:49:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:08.605 20:49:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:08.605 20:49:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:08.605 20:49:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.605 20:49:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:08.605 20:49:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:24:08.605 20:49:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.605 20:49:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:08.865 20:49:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:08.865 20:49:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:08.865 20:49:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.865 20:49:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:09.123 20:49:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:09.123 20:49:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:09.123 20:49:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.123 20:49:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:09.123 20:49:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:09.123 20:49:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:09.123 20:49:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:09.382 20:49:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:09.640 20:49:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:10.575 20:49:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:10.575 20:49:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:10.575 20:49:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.575 20:49:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:10.879 20:49:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:10.879 20:49:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:10.879 20:49:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.879 20:49:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:10.879 20:49:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:10.879 20:49:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:10.879 20:49:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.879 20:49:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:11.137 20:49:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.137 20:49:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:11.137 20:49:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.137 20:49:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:11.394 20:49:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.395 20:49:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:11.395 20:49:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.395 20:49:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:11.651 20:49:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:11.652 20:49:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:11.652 20:49:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.652 20:49:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:11.652 20:49:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.652 20:49:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:11.909 20:49:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:11.909 20:49:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:24:12.167 20:49:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:12.426 20:49:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:13.363 20:49:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:13.363 20:49:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:13.363 20:49:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.363 20:49:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:13.622 20:49:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.622 20:49:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:13.622 20:49:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.622 20:49:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:13.622 20:49:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.622 20:49:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:13.622 20:49:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.622 20:49:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:13.881 20:49:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.881 20:49:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:13.881 20:49:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.881 20:49:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:14.140 20:49:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.140 20:49:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:14.140 20:49:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.140 20:49:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:14.140 20:49:48 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.140 20:49:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:14.140 20:49:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.140 20:49:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:14.399 20:49:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.399 20:49:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:14.400 20:49:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:14.658 20:49:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:14.917 20:49:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:15.891 20:49:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:15.891 20:49:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:15.891 20:49:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:15.891 20:49:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:16.150 20:49:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:16.150 20:49:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:16.150 20:49:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:16.150 20:49:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.150 20:49:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.150 20:49:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:16.150 20:49:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:16.150 20:49:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.408 20:49:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.408 20:49:50 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:16.408 20:49:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.408 20:49:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:16.667 20:49:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.667 20:49:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:16.667 20:49:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.667 20:49:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:16.926 20:49:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.926 20:49:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:16.926 20:49:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.926 20:49:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:16.926 20:49:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.926 20:49:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:16.926 20:49:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:17.186 20:49:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:17.445 20:49:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:18.383 20:49:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:18.383 20:49:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:18.383 20:49:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.383 20:49:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:18.642 20:49:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:18.642 20:49:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:18.642 20:49:52 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:18.642 20:49:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.642 20:49:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:18.642 20:49:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:18.642 20:49:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.642 20:49:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:18.902 20:49:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:18.902 20:49:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:18.902 20:49:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.902 20:49:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:19.160 20:49:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.160 20:49:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:19.160 20:49:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.160 20:49:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:19.419 20:49:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.419 20:49:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:19.419 20:49:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.419 20:49:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:19.419 20:49:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.419 20:49:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:19.419 20:49:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:19.679 20:49:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:19.938 20:49:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:20.875 20:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:20.875 20:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:20.875 20:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.875 20:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:21.134 20:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.134 20:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:21.134 20:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.134 20:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:21.393 20:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:21.393 20:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:21.393 20:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.393 20:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:21.393 20:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.393 20:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:21.393 20:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.393 20:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:21.651 20:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.651 20:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:21.651 20:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:21.651 20:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.909 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.909 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:21.909 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.909 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:21.909 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:21.909 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2797024 00:24:21.909 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2797024 ']' 00:24:21.909 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2797024 00:24:21.909 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:24:21.909 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:21.909 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2797024 00:24:22.169 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:22.169 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:22.169 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2797024' 00:24:22.169 killing process with pid 2797024 00:24:22.169 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2797024 00:24:22.169 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2797024 00:24:22.169 Connection closed with partial response: 00:24:22.169 00:24:22.169 00:24:22.169 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2797024 00:24:22.169 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:22.169 [2024-07-15 20:49:27.333535] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:24:22.169 [2024-07-15 20:49:27.333588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2797024 ] 00:24:22.169 EAL: No free 2048 kB hugepages reported on node 1 00:24:22.169 [2024-07-15 20:49:27.384531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.169 [2024-07-15 20:49:27.460961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:22.169 Running I/O for 90 seconds... 
00:24:22.169 [2024-07-15 20:49:41.330736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:42016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.169 [2024-07-15 20:49:41.330776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:22.169 [2024-07-15 20:49:41.330942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:42024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.169 [2024-07-15 20:49:41.330956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:22.169 [2024-07-15 20:49:41.330971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:42032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.169 [2024-07-15 20:49:41.330979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:22.169 [2024-07-15 20:49:41.330993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:42040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.169 [2024-07-15 20:49:41.331002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:22.169 [2024-07-15 20:49:41.331016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:42048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.169 [2024-07-15 20:49:41.331024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.169 [2024-07-15 20:49:41.331037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:42056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.169 [2024-07-15 20:49:41.331044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:22.169 [2024-07-15 20:49:41.331057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.169 [2024-07-15 20:49:41.331064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:22.169 [2024-07-15 20:49:41.331078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:42072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.169 [2024-07-15 20:49:41.331087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:22.169 [2024-07-15 20:49:41.331100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:42080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.169 [2024-07-15 20:49:41.331108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:22.169 [2024-07-15 20:49:41.331120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:42088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.169 [2024-07-15 20:49:41.331129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:22.169 [2024-07-15 20:49:41.331142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:42096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.169 [2024-07-15 20:49:41.331155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:22.169 [2024-07-15 20:49:41.331169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:42104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.169 [2024-07-15 20:49:41.331177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:22.169 [2024-07-15 20:49:41.331190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:42112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.169 [2024-07-15 20:49:41.331197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:22.169 [2024-07-15 20:49:41.331209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:42120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.169 [2024-07-15 20:49:41.331216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:22.169 [2024-07-15 20:49:41.331234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:42128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.169 [2024-07-15 20:49:41.331241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:22.169 [2024-07-15 20:49:41.331254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:42136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.169 [2024-07-15 20:49:41.331260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:22.169 [2024-07-15 20:49:41.331272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:42144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.169 [2024-07-15 20:49:41.331279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:22.169 [2024-07-15 20:49:41.331292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:42152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.169 [2024-07-15 20:49:41.331299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:22.169 [2024-07-15 20:49:41.331312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:42160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.169 [2024-07-15 20:49:41.331320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:22.169 [2024-07-15 20:49:41.331333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:42168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.169 [2024-07-15 20:49:41.331340] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:22.169 [2024-07-15 20:49:41.331352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:42176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.169 [2024-07-15 20:49:41.331360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:22.169 [2024-07-15 20:49:41.331373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:42184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.169 [2024-07-15 20:49:41.331380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:22.169 [2024-07-15 20:49:41.331392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:42192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.169 [2024-07-15 20:49:41.331400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:22.169 [2024-07-15 20:49:41.331412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:42200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.169 [2024-07-15 20:49:41.331420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:22.169 [2024-07-15 20:49:41.331432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:42208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.169 [2024-07-15 20:49:41.331438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:22.169 [2024-07-15 20:49:41.331451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:42216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.169 [2024-07-15 20:49:41.331459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:22.169 [2024-07-15 20:49:41.331471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:42224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.169 [2024-07-15 20:49:41.331478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:22.169 [2024-07-15 20:49:41.331490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:42232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.169 [2024-07-15 20:49:41.331497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:22.169 [2024-07-15 20:49:41.331509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:42240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.169 [2024-07-15 20:49:41.331516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:22.169 [2024-07-15 20:49:41.331529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:42248 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:22.170 [2024-07-15 20:49:41.331536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.331549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:42256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.170 [2024-07-15 20:49:41.331556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.331568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:42264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.170 [2024-07-15 20:49:41.331575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.331587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:42272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.170 [2024-07-15 20:49:41.331594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.331610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:42280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.170 [2024-07-15 20:49:41.331617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.331629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:42288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.170 [2024-07-15 20:49:41.331636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.331651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:42296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.170 [2024-07-15 20:49:41.331658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.331670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:42304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.170 [2024-07-15 20:49:41.331677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.331689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:42312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.170 [2024-07-15 20:49:41.331697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.331709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:42320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.170 [2024-07-15 20:49:41.331716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.331729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:126 nsid:1 lba:42328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.170 [2024-07-15 20:49:41.331735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.331748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:42336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.170 [2024-07-15 20:49:41.331755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.331767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:42344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.170 [2024-07-15 20:49:41.331774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.331786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:42352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.170 [2024-07-15 20:49:41.331792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.331804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:42360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.170 [2024-07-15 20:49:41.331812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.331824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:42368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.170 [2024-07-15 20:49:41.331831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.331844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:42376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.170 [2024-07-15 20:49:41.331851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.331864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:42384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.170 [2024-07-15 20:49:41.331871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.331886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:42392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.170 [2024-07-15 20:49:41.331893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.331906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:42408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.170 [2024-07-15 20:49:41.331914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.331928] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:42416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.170 [2024-07-15 20:49:41.331935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.331947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:42424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.170 [2024-07-15 20:49:41.331954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.331966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:42432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.170 [2024-07-15 20:49:41.331974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.331986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:42440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.170 [2024-07-15 20:49:41.331993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.332005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:42448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.170 [2024-07-15 20:49:41.332012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.332026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:42456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.170 [2024-07-15 20:49:41.332033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.333362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:42464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.170 [2024-07-15 20:49:41.333379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.333402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:42472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.170 [2024-07-15 20:49:41.333410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.333429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:42480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.170 [2024-07-15 20:49:41.333436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.333455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:42488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.170 [2024-07-15 20:49:41.333462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0017 
p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.333481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:42496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.170 [2024-07-15 20:49:41.333491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.333509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:42504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.170 [2024-07-15 20:49:41.333517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.333536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:42512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.170 [2024-07-15 20:49:41.333543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.333561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:42520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.170 [2024-07-15 20:49:41.333569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.333588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:42528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.170 [2024-07-15 20:49:41.333594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.333613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:42400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.170 [2024-07-15 20:49:41.333620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.333641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:42536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.170 [2024-07-15 20:49:41.333648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.333666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:42544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.170 [2024-07-15 20:49:41.333673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.333693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:42552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.170 [2024-07-15 20:49:41.333700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.333719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:42560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.170 [2024-07-15 20:49:41.333727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.333746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:42568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.170 [2024-07-15 20:49:41.333754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.333772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:42576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.170 [2024-07-15 20:49:41.333779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.333798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:42584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.170 [2024-07-15 20:49:41.333807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.333825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:42592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.170 [2024-07-15 20:49:41.333832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.333851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:42600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.170 [2024-07-15 20:49:41.333858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.333877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:42608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.170 [2024-07-15 20:49:41.333884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.333903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:42616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.170 [2024-07-15 20:49:41.333910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.333929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:42624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.170 [2024-07-15 20:49:41.333935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.333955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:42632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.170 [2024-07-15 20:49:41.333962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.333980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:42640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.170 [2024-07-15 20:49:41.333987] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.334006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:42648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.170 [2024-07-15 20:49:41.334013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:41.334032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:42656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.170 [2024-07-15 20:49:41.334039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:54.209045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:80544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.170 [2024-07-15 20:49:54.209083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:22.170 [2024-07-15 20:49:54.209118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.171 [2024-07-15 20:49:54.209127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:22.171 [2024-07-15 20:49:54.209140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.171 [2024-07-15 20:49:54.209147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:22.171 [2024-07-15 20:49:54.209165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.171 [2024-07-15 20:49:54.209171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:22.171 [2024-07-15 20:49:54.209185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.171 [2024-07-15 20:49:54.209191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:22.171 [2024-07-15 20:49:54.209204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.171 [2024-07-15 20:49:54.209212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:22.171 [2024-07-15 20:49:54.209230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.171 [2024-07-15 20:49:54.209238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.171 [2024-07-15 20:49:54.209250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:22.171 [2024-07-15 20:49:54.209257 - 20:49:54.211975] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: several dozen near-identical command/completion pairs condensed - WRITE sqid:1 lba:80840-81512 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) interleaved with READ sqid:1 lba:80560-80688 len:8 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), every one completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1; final pair: 00:24:22.172 [2024-07-15 20:49:54.211968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:80688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.172 [2024-07-15 20:49:54.211975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
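The burst condensed above is the multipath test still pushing I/O while the active path reports ANA state inaccessible; the (03/02) in each completion is status code type 0x3 (path related) / status code 0x02 (asymmetric access inaccessible). If the console output is captured to a file, say build.log (the filename is an assumption, not something this run produces), the failed submissions can be tallied with nothing more than grep:

  # count completions that carry the path-related ANA status
  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' build.log
  # split the failed submissions by opcode (WRITE vs READ)
  grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' build.log | sort | uniq -c

Both commands only re-slice the text shown here; they change nothing about the test.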
00:24:22.172 Received shutdown signal, test time was about 27.213302 seconds 00:24:22.172 00:24:22.172 Latency(us) 00:24:22.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.172 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:22.172 Verification LBA range: start 0x0 length 0x4000 00:24:22.172 Nvme0n1 : 27.21 10310.28 40.27 0.00 0.00 12393.25 267.13 3019898.88 00:24:22.172 =================================================================================================================== 00:24:22.172 Total : 10310.28 40.27 0.00 0.00 12393.25 267.13 3019898.88 00:24:22.172 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:22.430 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:22.430 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:22.430 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:22.430 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:22.430 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:24:22.430 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:22.430 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:24:22.430 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:22.430 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:22.430 rmmod nvme_tcp 00:24:22.430 rmmod nvme_fabrics 00:24:22.430 rmmod nvme_keyring 00:24:22.430 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:22.430 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:24:22.430 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:24:22.430 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2796568 ']' 00:24:22.430 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2796568 00:24:22.430 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2796568 ']' 00:24:22.430 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2796568 00:24:22.430 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:24:22.430 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:22.430 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2796568 00:24:22.430 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:22.430 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:22.430 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2796568' killing process with pid 2796568 20:49:56
nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2796568 00:24:22.430 20:49:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2796568 00:24:22.689 20:49:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:22.689 20:49:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:22.689 20:49:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:22.689 20:49:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:22.689 20:49:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:22.689 20:49:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.689 20:49:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:22.689 20:49:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.225 20:49:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:25.225 00:24:25.225 real 0m39.038s 00:24:25.225 user 1m46.537s 00:24:25.225 sys 0m10.251s 00:24:25.225 20:49:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:25.225 20:49:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:25.225 ************************************ 00:24:25.225 END TEST nvmf_host_multipath_status 00:24:25.225 ************************************ 00:24:25.225 20:49:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:25.225 20:49:59 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:25.225 20:49:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:25.225 20:49:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:25.225 20:49:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:25.225 ************************************ 00:24:25.225 START TEST nvmf_discovery_remove_ifc 00:24:25.225 ************************************ 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:25.225 * Looking for test storage... 
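For reference, the multipath_status.sh teardown traced just above boils down to a short command sequence. A condensed sketch using the subsystem NQN, target pid and interface name from this run (running it standalone like this is an assumption; the real logic lives in nvmftestfini in nvmf/common.sh):

  # drop the subsystem, unload the host-side modules, stop the target, clean the test IP
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp      # pulls nvme_tcp, nvme_fabrics and nvme_keyring, as logged
  modprobe -v -r nvme-fabrics
  kill 2796568                 # the nvmfpid recorded when the target was started
  ip -4 addr flush cvl_0_1     # remove the initiator-side test address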
00:24:25.225 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2-6 -- # PATH built, exported and echoed (four near-identical dumps of the same value condensed): /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin prepended repeatedly ahead of /usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- #
host_sock=/tmp/host.sock 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:24:25.225 20:49:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:30.517 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:30.517 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:24:30.517 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:30.517 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:30.517 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:30.517 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:30.517 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:30.517 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:24:30.517 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:30.517 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:24:30.517 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:24:30.517 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:24:30.517 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:24:30.517 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:24:30.517 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:24:30.517 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:30.517 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:30.517 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:30.517 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:30.517 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:30.517 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:30.517 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:30.518 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:30.518 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:30.518 20:50:04 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:30.518 Found net devices under 0000:86:00.0: cvl_0_0 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:30.518 Found net devices under 0000:86:00.1: cvl_0_1 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:30.518 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:30.518 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:24:30.518 00:24:30.518 --- 10.0.0.2 ping statistics --- 00:24:30.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.518 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:30.518 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:30.518 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:24:30.518 00:24:30.518 --- 10.0.0.1 ping statistics --- 00:24:30.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.518 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2805330 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2805330 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2805330 ']' 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:30.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:30.518 20:50:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:30.518 [2024-07-15 20:50:04.865185] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
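Stripped of the xtrace noise, the nvmf_tcp_init sequence above builds a two-endpoint topology from the NIC pair and then launches the target inside the new namespace. A condensed sketch with the same device names and addresses as logged (a reading aid, not a replacement for nvmftestinit; paths are relative to the spdk checkout):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                               # reachability check, both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2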
00:24:30.518 [2024-07-15 20:50:04.865233] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:30.518 EAL: No free 2048 kB hugepages reported on node 1 00:24:30.518 [2024-07-15 20:50:04.916784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.518 [2024-07-15 20:50:04.994875] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:30.518 [2024-07-15 20:50:04.994908] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:30.518 [2024-07-15 20:50:04.994916] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:30.518 [2024-07-15 20:50:04.994922] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:30.518 [2024-07-15 20:50:04.994927] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:30.518 [2024-07-15 20:50:04.994943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:31.455 20:50:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:31.455 20:50:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:24:31.455 20:50:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:31.455 20:50:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:31.455 20:50:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:31.455 20:50:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:31.455 20:50:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:31.455 20:50:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.455 20:50:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:31.455 [2024-07-15 20:50:05.709017] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:31.455 [2024-07-15 20:50:05.717138] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:31.455 null0 00:24:31.455 [2024-07-15 20:50:05.749143] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:31.455 20:50:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.455 20:50:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2805572 00:24:31.455 20:50:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:31.455 20:50:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2805572 /tmp/host.sock 00:24:31.455 20:50:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2805572 ']' 00:24:31.455 20:50:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:24:31.455 20:50:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:24:31.455 20:50:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:31.455 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:31.455 20:50:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:31.455 20:50:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:31.455 [2024-07-15 20:50:05.814659] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:24:31.455 [2024-07-15 20:50:05.814699] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2805572 ] 00:24:31.455 EAL: No free 2048 kB hugepages reported on node 1 00:24:31.455 [2024-07-15 20:50:05.866507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.714 [2024-07-15 20:50:05.939977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.280 20:50:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:32.280 20:50:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:24:32.280 20:50:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:32.280 20:50:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:32.280 20:50:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.280 20:50:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.280 20:50:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.280 20:50:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:32.280 20:50:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.280 20:50:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.280 20:50:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.280 20:50:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:32.280 20:50:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.280 20:50:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:33.657 [2024-07-15 20:50:07.727040] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:33.657 [2024-07-15 20:50:07.727060] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:33.657 [2024-07-15 20:50:07.727072] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:33.657 [2024-07-15 20:50:07.815348] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:33.657 [2024-07-15 20:50:07.960581] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:33.657 [2024-07-15 20:50:07.960625] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:33.657 [2024-07-15 20:50:07.960645] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:33.657 [2024-07-15 20:50:07.960657] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:33.657 [2024-07-15 20:50:07.960675] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:33.657 20:50:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.658 20:50:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:33.658 20:50:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:33.658 20:50:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:33.658 [2024-07-15 20:50:07.966330] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1ab7e30 was disconnected and freed. delete nvme_qpair. 00:24:33.658 20:50:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:33.658 20:50:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.658 20:50:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:33.658 20:50:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:33.658 20:50:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:33.658 20:50:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.658 20:50:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:33.658 20:50:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:33.658 20:50:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:33.658 20:50:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:33.658 20:50:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:33.658 20:50:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:33.658 20:50:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:33.658 20:50:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:33.658 20:50:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.658 20:50:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:33.658 20:50:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:33.934 20:50:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.934 20:50:08 
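One complete wait_for_bdev polling cycle is traced above: list the bdevs over the host RPC socket, compare against the expected name, sleep if they differ. A minimal sketch of that pattern, assuming scripts/rpc.py is what rpc_cmd wraps here (the real helpers live in discovery_remove_ifc.sh):

  get_bdev_list() {
      scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {
      # poll once a second until the bdev list matches $1 exactly
      while [[ "$(get_bdev_list)" != "$1" ]]; do
          sleep 1
      done
  }
  wait_for_bdev ''    # after the interface is pulled, wait for nvme0n1 to go away

The iterations that follow keep returning nvme0n1 until the downed interface makes the TCP socket time out (the errno 110 read error below) and the controller teardown begins.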
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:33.934 20:50:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:34.927 (further identical polling iterations at 20:50:09 - 20:50:11 condensed: each re-runs get_bdev_list via rpc_cmd -s /tmp/host.sock bdev_get_bdevs piped through jq -r '.[].name', sort and xargs, still sees nvme0n1, and sleeps 1s) 00:24:37.996 20:50:12
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:37.996 20:50:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:37.996 20:50:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.996 20:50:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:37.996 20:50:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:37.996 20:50:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:37.996 20:50:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:37.996 20:50:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.996 20:50:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:37.996 20:50:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:38.934 20:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:38.934 20:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:38.934 20:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:38.934 20:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:38.934 20:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.934 20:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:38.934 20:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.934 20:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.934 [2024-07-15 20:50:13.401830] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:38.934 [2024-07-15 20:50:13.401868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.934 [2024-07-15 20:50:13.401878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.934 [2024-07-15 20:50:13.401886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.934 [2024-07-15 20:50:13.401892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.934 [2024-07-15 20:50:13.401899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.934 [2024-07-15 20:50:13.401905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.934 [2024-07-15 20:50:13.401913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.934 [2024-07-15 20:50:13.401919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:38.934 [2024-07-15 20:50:13.401925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.934 [2024-07-15 20:50:13.401932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.934 [2024-07-15 20:50:13.401938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7e690 is same with the state(5) to be set 00:24:38.934 [2024-07-15 20:50:13.411853] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a7e690 (9): Bad file descriptor 00:24:38.934 20:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:38.934 20:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:39.194 [2024-07-15 20:50:13.421892] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:40.129 20:50:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:40.129 20:50:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:40.129 20:50:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:40.129 20:50:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:40.129 20:50:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.129 20:50:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:40.129 20:50:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:40.129 [2024-07-15 20:50:14.434242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:40.129 [2024-07-15 20:50:14.434278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7e690 with addr=10.0.0.2, port=4420 00:24:40.129 [2024-07-15 20:50:14.434291] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7e690 is same with the state(5) to be set 00:24:40.129 [2024-07-15 20:50:14.434315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a7e690 (9): Bad file descriptor 00:24:40.129 [2024-07-15 20:50:14.434712] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:40.129 [2024-07-15 20:50:14.434732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:40.129 [2024-07-15 20:50:14.434742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:40.130 [2024-07-15 20:50:14.434752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:40.130 [2024-07-15 20:50:14.434769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
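The block above repeats once a second: the test is polling the bdev list over the RPC socket while the path to 10.0.0.2:4420 is severed, waiting for nvme0n1 to disappear. A minimal sketch of that polling pattern, pieced together from the xtrace (rpc_cmd is the SPDK test helper sourced by the suite; the retry cap is an assumption for illustration — the trace itself loops until the harness times out):

    # get_bdev_list, as the xtrace shows it: RPC over /tmp/host.sock,
    # names extracted with jq, then sort | xargs into one normalized line.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # wait_for_bdev "" waits for an empty list; wait_for_bdev nvme0n1 for one bdev.
    # The 60-iteration cap is illustrative only -- not taken from the trace.
    wait_for_bdev() {
        local expected=$1
        for ((i = 0; i < 60; i++)); do
            [[ $(get_bdev_list) == "$expected" ]] && return 0
            sleep 1
        done
        return 1
    }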
00:24:40.130 [2024-07-15 20:50:14.434779] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:40.130 20:50:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.130 20:50:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:40.130 20:50:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:41.066 [2024-07-15 20:50:15.437258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:41.066 [2024-07-15 20:50:15.437279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:41.066 [2024-07-15 20:50:15.437287] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:41.066 [2024-07-15 20:50:15.437294] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:24:41.066 [2024-07-15 20:50:15.437306] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.066 [2024-07-15 20:50:15.437323] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:41.066 [2024-07-15 20:50:15.437340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.066 [2024-07-15 20:50:15.437349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.066 [2024-07-15 20:50:15.437358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.066 [2024-07-15 20:50:15.437365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.066 [2024-07-15 20:50:15.437372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.066 [2024-07-15 20:50:15.437382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.066 [2024-07-15 20:50:15.437390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.066 [2024-07-15 20:50:15.437397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.066 [2024-07-15 20:50:15.437405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.066 [2024-07-15 20:50:15.437411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.066 [2024-07-15 20:50:15.437418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
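The connect() errno 110 and the repeated "Bad file descriptor" / reset failures above are the intended fault: earlier in the trace the test deleted the target's address and downed its link inside the network namespace, so every host-side reconnect attempt to 10.0.0.2:4420 now times out. The severing step, verbatim from discovery_remove_ifc.sh@75-76 above:

    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down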
00:24:41.066 [2024-07-15 20:50:15.437547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a7da80 (9): Bad file descriptor 00:24:41.066 [2024-07-15 20:50:15.438555] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:41.066 [2024-07-15 20:50:15.438567] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:24:41.066 20:50:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:41.066 20:50:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:41.066 20:50:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:41.066 20:50:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:41.066 20:50:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.066 20:50:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:41.066 20:50:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:41.066 20:50:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.066 20:50:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:41.066 20:50:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:41.066 20:50:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:41.325 20:50:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:41.325 20:50:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:41.325 20:50:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:41.325 20:50:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:41.325 20:50:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:41.325 20:50:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.325 20:50:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:41.325 20:50:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:41.325 20:50:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.325 20:50:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:41.325 20:50:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:42.263 20:50:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:42.263 20:50:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:42.263 20:50:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:42.263 20:50:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:42.263 20:50:16 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.263 20:50:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:42.263 20:50:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:42.263 20:50:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.263 20:50:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:42.263 20:50:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:43.199 [2024-07-15 20:50:17.451835] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:43.199 [2024-07-15 20:50:17.451852] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:43.199 [2024-07-15 20:50:17.451864] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:43.199 [2024-07-15 20:50:17.578250] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:43.199 [2024-07-15 20:50:17.634454] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:43.199 [2024-07-15 20:50:17.634486] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:43.199 [2024-07-15 20:50:17.634503] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:43.199 [2024-07-15 20:50:17.634515] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:43.199 [2024-07-15 20:50:17.634522] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:43.199 [2024-07-15 20:50:17.641140] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1a948d0 was disconnected and freed. delete nvme_qpair. 
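With the address and link restored, the discovery poller reconnects unaided and re-attaches the subsystem under a fresh controller name, nvme1 instead of nvme0, which is exactly what the discovery_log_page_cb / attach-done lines above report. The restore-and-wait step, condensed from discovery_remove_ifc.sh@82-86 in the trace:

    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1    # the re-attached namespace surfaces as nvme1n1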
00:24:43.458 20:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:43.458 20:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:43.458 20:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:43.458 20:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.458 20:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:43.458 20:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:43.458 20:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:43.458 20:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.458 20:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:43.458 20:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:43.458 20:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2805572 00:24:43.458 20:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2805572 ']' 00:24:43.458 20:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2805572 00:24:43.458 20:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:24:43.458 20:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:43.458 20:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2805572 00:24:43.458 20:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:43.458 20:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:43.458 20:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2805572' 00:24:43.458 killing process with pid 2805572 00:24:43.458 20:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2805572 00:24:43.458 20:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2805572 00:24:43.716 20:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:43.716 20:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:43.716 20:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:24:43.716 20:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:43.716 20:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:24:43.716 20:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:43.716 20:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:43.716 rmmod nvme_tcp 00:24:43.716 rmmod nvme_fabrics 00:24:43.716 rmmod nvme_keyring 00:24:43.716 20:50:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:43.716 20:50:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:24:43.716 20:50:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
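Teardown then stops the two SPDK processes through the killprocess helper. A sketch of it, reconstructed from the xtrace above — a paraphrase of autotest_common.sh, not its verbatim source:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1     # bail out if the process is already gone
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            # the real helper branches when process_name is "sudo"; this run
            # shows reactor_0/reactor_1, so that branch is omitted in the sketch
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # reap it; the target is a child of the test shell
    }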
00:24:43.716 20:50:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2805330 ']' 00:24:43.716 20:50:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2805330 00:24:43.716 20:50:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2805330 ']' 00:24:43.716 20:50:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2805330 00:24:43.716 20:50:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:24:43.716 20:50:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:43.716 20:50:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2805330 00:24:43.716 20:50:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:43.716 20:50:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:43.716 20:50:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2805330' 00:24:43.716 killing process with pid 2805330 00:24:43.716 20:50:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2805330 00:24:43.716 20:50:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2805330 00:24:43.975 20:50:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:43.975 20:50:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:43.975 20:50:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:43.975 20:50:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:43.975 20:50:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:43.975 20:50:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.975 20:50:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:43.975 20:50:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.881 20:50:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:45.881 00:24:45.881 real 0m21.093s 00:24:45.881 user 0m26.628s 00:24:45.881 sys 0m5.337s 00:24:45.881 20:50:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:45.881 20:50:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:45.881 ************************************ 00:24:45.881 END TEST nvmf_discovery_remove_ifc 00:24:45.881 ************************************ 00:24:45.881 20:50:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:45.881 20:50:20 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:45.881 20:50:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:45.881 20:50:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:45.881 20:50:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:46.140 ************************************ 00:24:46.140 START TEST nvmf_identify_kernel_target 00:24:46.140 ************************************ 
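The test starting here drives a kernel NVMe-oF target rather than an SPDK one. Its setup appears further down in the trace as a string of mkdir/echo/ln calls (nvmf/common.sh@658-677) that builds the target through nvmet configfs; condensed below with the redirect targets filled in from the standard nvmet configfs layout — xtrace does not print redirections, so the attribute file names are inferred, not read from the log:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$nvmet/ports/1"

    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
    echo 1 > "$subsys/attr_allow_any_host"                  # no host allow-list
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"  # back with the local disk
    echo 1 > "$subsys/namespaces/1/enable"

    echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
    echo tcp      > "$nvmet/ports/1/addr_trtype"
    echo 4420     > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4     > "$nvmet/ports/1/addr_adrfam"

    ln -s "$subsys" "$nvmet/ports/1/subsystems/"            # expose it on the port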
00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:46.140 * Looking for test storage... 00:24:46.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:46.140 20:50:20 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:24:46.140 20:50:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:51.408 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:51.408 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:24:51.408 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:51.408 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:51.409 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:51.409 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:51.409 Found net devices under 0000:86:00.0: cvl_0_0 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:51.409 Found net devices under 0000:86:00.1: cvl_0_1 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:51.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:51.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:24:51.409 00:24:51.409 --- 10.0.0.2 ping statistics --- 00:24:51.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.409 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:51.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:51.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:24:51.409 00:24:51.409 --- 10.0.0.1 ping statistics --- 00:24:51.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.409 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:51.409 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:51.410 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:51.410 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:51.410 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:51.410 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:51.410 20:50:25 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:51.410 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:24:51.410 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:24:51.410 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:51.410 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:51.410 20:50:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:53.944 Waiting for block devices as requested 00:24:53.944 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:24:53.944 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:53.944 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:53.944 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:53.944 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:53.944 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:53.944 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:53.944 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:54.241 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:54.241 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:54.241 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:54.241 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:54.499 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:54.499 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:54.499 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:54.758 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:54.758 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:54.758 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:54.758 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:54.758 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:54.758 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:24:54.758 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:54.758 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:54.758 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:54.758 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:54.758 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:54.758 No valid GPT data, bailing 00:24:54.758 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:54.758 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:24:54.758 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:24:54.758 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:54.758 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:24:54.758 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:54.758 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:54.758 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:54.758 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:54.758 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:24:54.758 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:24:54.758 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:24:54.758 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:54.758 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:24:54.758 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:24:54.758 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:24:54.758 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:54.759 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:24:55.019 00:24:55.019 Discovery Log Number of Records 2, Generation counter 2 00:24:55.019 =====Discovery Log Entry 0====== 00:24:55.019 trtype: tcp 00:24:55.019 adrfam: ipv4 00:24:55.019 subtype: current discovery subsystem 00:24:55.019 treq: not specified, sq flow control disable supported 00:24:55.019 portid: 1 00:24:55.019 trsvcid: 4420 00:24:55.019 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:55.019 traddr: 10.0.0.1 00:24:55.019 eflags: none 00:24:55.019 sectype: none 00:24:55.019 =====Discovery Log Entry 1====== 00:24:55.019 trtype: tcp 00:24:55.019 adrfam: ipv4 00:24:55.019 subtype: nvme subsystem 00:24:55.019 treq: not specified, sq flow control disable supported 00:24:55.019 portid: 1 00:24:55.019 trsvcid: 4420 00:24:55.019 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:55.019 traddr: 10.0.0.1 00:24:55.019 eflags: none 00:24:55.019 sectype: none 00:24:55.019 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:55.019 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:55.019 EAL: No free 2048 kB hugepages reported on node 1 00:24:55.019 ===================================================== 00:24:55.019 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:55.019 ===================================================== 00:24:55.019 Controller Capabilities/Features 00:24:55.019 ================================ 00:24:55.019 Vendor ID: 0000 00:24:55.019 Subsystem Vendor ID: 0000 00:24:55.019 Serial Number: 552f39c5a6983802661f 00:24:55.019 Model Number: Linux 00:24:55.019 Firmware Version: 6.7.0-68 00:24:55.019 Recommended Arb Burst: 0 00:24:55.019 IEEE OUI Identifier: 00 00 00 00:24:55.019 Multi-path I/O 00:24:55.019 May have multiple subsystem ports: No 00:24:55.019 May have multiple 
controllers: No 00:24:55.019 Associated with SR-IOV VF: No 00:24:55.019 Max Data Transfer Size: Unlimited 00:24:55.019 Max Number of Namespaces: 0 00:24:55.019 Max Number of I/O Queues: 1024 00:24:55.019 NVMe Specification Version (VS): 1.3 00:24:55.019 NVMe Specification Version (Identify): 1.3 00:24:55.019 Maximum Queue Entries: 1024 00:24:55.019 Contiguous Queues Required: No 00:24:55.019 Arbitration Mechanisms Supported 00:24:55.019 Weighted Round Robin: Not Supported 00:24:55.019 Vendor Specific: Not Supported 00:24:55.019 Reset Timeout: 7500 ms 00:24:55.019 Doorbell Stride: 4 bytes 00:24:55.019 NVM Subsystem Reset: Not Supported 00:24:55.019 Command Sets Supported 00:24:55.019 NVM Command Set: Supported 00:24:55.019 Boot Partition: Not Supported 00:24:55.019 Memory Page Size Minimum: 4096 bytes 00:24:55.019 Memory Page Size Maximum: 4096 bytes 00:24:55.019 Persistent Memory Region: Not Supported 00:24:55.019 Optional Asynchronous Events Supported 00:24:55.019 Namespace Attribute Notices: Not Supported 00:24:55.019 Firmware Activation Notices: Not Supported 00:24:55.019 ANA Change Notices: Not Supported 00:24:55.019 PLE Aggregate Log Change Notices: Not Supported 00:24:55.019 LBA Status Info Alert Notices: Not Supported 00:24:55.019 EGE Aggregate Log Change Notices: Not Supported 00:24:55.019 Normal NVM Subsystem Shutdown event: Not Supported 00:24:55.019 Zone Descriptor Change Notices: Not Supported 00:24:55.019 Discovery Log Change Notices: Supported 00:24:55.019 Controller Attributes 00:24:55.019 128-bit Host Identifier: Not Supported 00:24:55.019 Non-Operational Permissive Mode: Not Supported 00:24:55.019 NVM Sets: Not Supported 00:24:55.019 Read Recovery Levels: Not Supported 00:24:55.019 Endurance Groups: Not Supported 00:24:55.019 Predictable Latency Mode: Not Supported 00:24:55.019 Traffic Based Keep ALive: Not Supported 00:24:55.019 Namespace Granularity: Not Supported 00:24:55.019 SQ Associations: Not Supported 00:24:55.019 UUID List: Not Supported 00:24:55.019 Multi-Domain Subsystem: Not Supported 00:24:55.019 Fixed Capacity Management: Not Supported 00:24:55.019 Variable Capacity Management: Not Supported 00:24:55.019 Delete Endurance Group: Not Supported 00:24:55.019 Delete NVM Set: Not Supported 00:24:55.019 Extended LBA Formats Supported: Not Supported 00:24:55.019 Flexible Data Placement Supported: Not Supported 00:24:55.019 00:24:55.019 Controller Memory Buffer Support 00:24:55.019 ================================ 00:24:55.019 Supported: No 00:24:55.019 00:24:55.019 Persistent Memory Region Support 00:24:55.019 ================================ 00:24:55.019 Supported: No 00:24:55.019 00:24:55.019 Admin Command Set Attributes 00:24:55.019 ============================ 00:24:55.019 Security Send/Receive: Not Supported 00:24:55.019 Format NVM: Not Supported 00:24:55.019 Firmware Activate/Download: Not Supported 00:24:55.019 Namespace Management: Not Supported 00:24:55.019 Device Self-Test: Not Supported 00:24:55.019 Directives: Not Supported 00:24:55.019 NVMe-MI: Not Supported 00:24:55.019 Virtualization Management: Not Supported 00:24:55.019 Doorbell Buffer Config: Not Supported 00:24:55.019 Get LBA Status Capability: Not Supported 00:24:55.019 Command & Feature Lockdown Capability: Not Supported 00:24:55.019 Abort Command Limit: 1 00:24:55.019 Async Event Request Limit: 1 00:24:55.019 Number of Firmware Slots: N/A 00:24:55.019 Firmware Slot 1 Read-Only: N/A 00:24:55.019 Firmware Activation Without Reset: N/A 00:24:55.019 Multiple Update Detection Support: N/A 
00:24:55.019 Firmware Update Granularity: No Information Provided 00:24:55.019 Per-Namespace SMART Log: No 00:24:55.019 Asymmetric Namespace Access Log Page: Not Supported 00:24:55.019 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:55.019 Command Effects Log Page: Not Supported 00:24:55.019 Get Log Page Extended Data: Supported 00:24:55.019 Telemetry Log Pages: Not Supported 00:24:55.019 Persistent Event Log Pages: Not Supported 00:24:55.019 Supported Log Pages Log Page: May Support 00:24:55.019 Commands Supported & Effects Log Page: Not Supported 00:24:55.019 Feature Identifiers & Effects Log Page:May Support 00:24:55.019 NVMe-MI Commands & Effects Log Page: May Support 00:24:55.019 Data Area 4 for Telemetry Log: Not Supported 00:24:55.019 Error Log Page Entries Supported: 1 00:24:55.019 Keep Alive: Not Supported 00:24:55.019 00:24:55.019 NVM Command Set Attributes 00:24:55.019 ========================== 00:24:55.019 Submission Queue Entry Size 00:24:55.020 Max: 1 00:24:55.020 Min: 1 00:24:55.020 Completion Queue Entry Size 00:24:55.020 Max: 1 00:24:55.020 Min: 1 00:24:55.020 Number of Namespaces: 0 00:24:55.020 Compare Command: Not Supported 00:24:55.020 Write Uncorrectable Command: Not Supported 00:24:55.020 Dataset Management Command: Not Supported 00:24:55.020 Write Zeroes Command: Not Supported 00:24:55.020 Set Features Save Field: Not Supported 00:24:55.020 Reservations: Not Supported 00:24:55.020 Timestamp: Not Supported 00:24:55.020 Copy: Not Supported 00:24:55.020 Volatile Write Cache: Not Present 00:24:55.020 Atomic Write Unit (Normal): 1 00:24:55.020 Atomic Write Unit (PFail): 1 00:24:55.020 Atomic Compare & Write Unit: 1 00:24:55.020 Fused Compare & Write: Not Supported 00:24:55.020 Scatter-Gather List 00:24:55.020 SGL Command Set: Supported 00:24:55.020 SGL Keyed: Not Supported 00:24:55.020 SGL Bit Bucket Descriptor: Not Supported 00:24:55.020 SGL Metadata Pointer: Not Supported 00:24:55.020 Oversized SGL: Not Supported 00:24:55.020 SGL Metadata Address: Not Supported 00:24:55.020 SGL Offset: Supported 00:24:55.020 Transport SGL Data Block: Not Supported 00:24:55.020 Replay Protected Memory Block: Not Supported 00:24:55.020 00:24:55.020 Firmware Slot Information 00:24:55.020 ========================= 00:24:55.020 Active slot: 0 00:24:55.020 00:24:55.020 00:24:55.020 Error Log 00:24:55.020 ========= 00:24:55.020 00:24:55.020 Active Namespaces 00:24:55.020 ================= 00:24:55.020 Discovery Log Page 00:24:55.020 ================== 00:24:55.020 Generation Counter: 2 00:24:55.020 Number of Records: 2 00:24:55.020 Record Format: 0 00:24:55.020 00:24:55.020 Discovery Log Entry 0 00:24:55.020 ---------------------- 00:24:55.020 Transport Type: 3 (TCP) 00:24:55.020 Address Family: 1 (IPv4) 00:24:55.020 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:55.020 Entry Flags: 00:24:55.020 Duplicate Returned Information: 0 00:24:55.020 Explicit Persistent Connection Support for Discovery: 0 00:24:55.020 Transport Requirements: 00:24:55.020 Secure Channel: Not Specified 00:24:55.020 Port ID: 1 (0x0001) 00:24:55.020 Controller ID: 65535 (0xffff) 00:24:55.020 Admin Max SQ Size: 32 00:24:55.020 Transport Service Identifier: 4420 00:24:55.020 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:55.020 Transport Address: 10.0.0.1 00:24:55.020 Discovery Log Entry 1 00:24:55.020 ---------------------- 00:24:55.020 Transport Type: 3 (TCP) 00:24:55.020 Address Family: 1 (IPv4) 00:24:55.020 Subsystem Type: 2 (NVM Subsystem) 00:24:55.020 Entry Flags: 
00:24:55.020 Duplicate Returned Information: 0 00:24:55.020 Explicit Persistent Connection Support for Discovery: 0 00:24:55.020 Transport Requirements: 00:24:55.020 Secure Channel: Not Specified 00:24:55.020 Port ID: 1 (0x0001) 00:24:55.020 Controller ID: 65535 (0xffff) 00:24:55.020 Admin Max SQ Size: 32 00:24:55.020 Transport Service Identifier: 4420 00:24:55.020 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:55.020 Transport Address: 10.0.0.1 00:24:55.020 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:55.020 EAL: No free 2048 kB hugepages reported on node 1 00:24:55.020 get_feature(0x01) failed 00:24:55.020 get_feature(0x02) failed 00:24:55.020 get_feature(0x04) failed 00:24:55.020 ===================================================== 00:24:55.020 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:55.020 ===================================================== 00:24:55.020 Controller Capabilities/Features 00:24:55.020 ================================ 00:24:55.020 Vendor ID: 0000 00:24:55.020 Subsystem Vendor ID: 0000 00:24:55.020 Serial Number: cc7f2eba58ddbdb31a06 00:24:55.020 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:55.020 Firmware Version: 6.7.0-68 00:24:55.020 Recommended Arb Burst: 6 00:24:55.020 IEEE OUI Identifier: 00 00 00 00:24:55.020 Multi-path I/O 00:24:55.020 May have multiple subsystem ports: Yes 00:24:55.020 May have multiple controllers: Yes 00:24:55.020 Associated with SR-IOV VF: No 00:24:55.020 Max Data Transfer Size: Unlimited 00:24:55.020 Max Number of Namespaces: 1024 00:24:55.020 Max Number of I/O Queues: 128 00:24:55.020 NVMe Specification Version (VS): 1.3 00:24:55.020 NVMe Specification Version (Identify): 1.3 00:24:55.020 Maximum Queue Entries: 1024 00:24:55.020 Contiguous Queues Required: No 00:24:55.020 Arbitration Mechanisms Supported 00:24:55.020 Weighted Round Robin: Not Supported 00:24:55.020 Vendor Specific: Not Supported 00:24:55.020 Reset Timeout: 7500 ms 00:24:55.020 Doorbell Stride: 4 bytes 00:24:55.020 NVM Subsystem Reset: Not Supported 00:24:55.020 Command Sets Supported 00:24:55.020 NVM Command Set: Supported 00:24:55.020 Boot Partition: Not Supported 00:24:55.020 Memory Page Size Minimum: 4096 bytes 00:24:55.020 Memory Page Size Maximum: 4096 bytes 00:24:55.020 Persistent Memory Region: Not Supported 00:24:55.020 Optional Asynchronous Events Supported 00:24:55.020 Namespace Attribute Notices: Supported 00:24:55.020 Firmware Activation Notices: Not Supported 00:24:55.020 ANA Change Notices: Supported 00:24:55.020 PLE Aggregate Log Change Notices: Not Supported 00:24:55.020 LBA Status Info Alert Notices: Not Supported 00:24:55.020 EGE Aggregate Log Change Notices: Not Supported 00:24:55.020 Normal NVM Subsystem Shutdown event: Not Supported 00:24:55.020 Zone Descriptor Change Notices: Not Supported 00:24:55.020 Discovery Log Change Notices: Not Supported 00:24:55.020 Controller Attributes 00:24:55.020 128-bit Host Identifier: Supported 00:24:55.020 Non-Operational Permissive Mode: Not Supported 00:24:55.020 NVM Sets: Not Supported 00:24:55.020 Read Recovery Levels: Not Supported 00:24:55.020 Endurance Groups: Not Supported 00:24:55.020 Predictable Latency Mode: Not Supported 00:24:55.020 Traffic Based Keep Alive: Supported 00:24:55.020 Namespace Granularity: Not Supported
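The spdk_nvme_identify invocation quoted earlier in this block selects its target with a single -r transport ID string of space-separated key:value pairs; the report that continues below is its output for the data subsystem. The same command, reformatted for readability with all values verbatim from the trace (only the line break is added):

# Identify the SPDK-backed kernel subsystem directly over TCP.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'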
00:24:55.020 SQ Associations: Not Supported 00:24:55.020 UUID List: Not Supported 00:24:55.020 Multi-Domain Subsystem: Not Supported 00:24:55.020 Fixed Capacity Management: Not Supported 00:24:55.020 Variable Capacity Management: Not Supported 00:24:55.020 Delete Endurance Group: Not Supported 00:24:55.020 Delete NVM Set: Not Supported 00:24:55.020 Extended LBA Formats Supported: Not Supported 00:24:55.020 Flexible Data Placement Supported: Not Supported 00:24:55.020 00:24:55.020 Controller Memory Buffer Support 00:24:55.020 ================================ 00:24:55.020 Supported: No 00:24:55.020 00:24:55.020 Persistent Memory Region Support 00:24:55.020 ================================ 00:24:55.020 Supported: No 00:24:55.020 00:24:55.020 Admin Command Set Attributes 00:24:55.020 ============================ 00:24:55.020 Security Send/Receive: Not Supported 00:24:55.020 Format NVM: Not Supported 00:24:55.020 Firmware Activate/Download: Not Supported 00:24:55.020 Namespace Management: Not Supported 00:24:55.020 Device Self-Test: Not Supported 00:24:55.020 Directives: Not Supported 00:24:55.020 NVMe-MI: Not Supported 00:24:55.020 Virtualization Management: Not Supported 00:24:55.020 Doorbell Buffer Config: Not Supported 00:24:55.020 Get LBA Status Capability: Not Supported 00:24:55.020 Command & Feature Lockdown Capability: Not Supported 00:24:55.020 Abort Command Limit: 4 00:24:55.020 Async Event Request Limit: 4 00:24:55.020 Number of Firmware Slots: N/A 00:24:55.020 Firmware Slot 1 Read-Only: N/A 00:24:55.020 Firmware Activation Without Reset: N/A 00:24:55.020 Multiple Update Detection Support: N/A 00:24:55.020 Firmware Update Granularity: No Information Provided 00:24:55.020 Per-Namespace SMART Log: Yes 00:24:55.020 Asymmetric Namespace Access Log Page: Supported 00:24:55.020 ANA Transition Time : 10 sec 00:24:55.020 00:24:55.020 Asymmetric Namespace Access Capabilities 00:24:55.020 ANA Optimized State : Supported 00:24:55.020 ANA Non-Optimized State : Supported 00:24:55.020 ANA Inaccessible State : Supported 00:24:55.020 ANA Persistent Loss State : Supported 00:24:55.020 ANA Change State : Supported 00:24:55.020 ANAGRPID is not changed : No 00:24:55.020 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:55.020 00:24:55.020 ANA Group Identifier Maximum : 128 00:24:55.020 Number of ANA Group Identifiers : 128 00:24:55.020 Max Number of Allowed Namespaces : 1024 00:24:55.020 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:55.020 Command Effects Log Page: Supported 00:24:55.020 Get Log Page Extended Data: Supported 00:24:55.020 Telemetry Log Pages: Not Supported 00:24:55.020 Persistent Event Log Pages: Not Supported 00:24:55.020 Supported Log Pages Log Page: May Support 00:24:55.020 Commands Supported & Effects Log Page: Not Supported 00:24:55.020 Feature Identifiers & Effects Log Page: May Support 00:24:55.020 NVMe-MI Commands & Effects Log Page: May Support 00:24:55.020 Data Area 4 for Telemetry Log: Not Supported 00:24:55.020 Error Log Page Entries Supported: 128 00:24:55.020 Keep Alive: Supported 00:24:55.020 Keep Alive Granularity: 1000 ms 00:24:55.020 00:24:55.020 NVM Command Set Attributes 00:24:55.020 ========================== 00:24:55.020 Submission Queue Entry Size 00:24:55.020 Max: 64 00:24:55.020 Min: 64 00:24:55.020 Completion Queue Entry Size 00:24:55.021 Max: 16 00:24:55.021 Min: 16 00:24:55.021 Number of Namespaces: 1024 00:24:55.021 Compare Command: Not Supported 00:24:55.021 Write Uncorrectable Command: Not Supported 00:24:55.021 Dataset Management Command: Supported
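The controller above advertises Asymmetric Namespace Access with up to 128 ANA groups, and the descriptor dump further down shows a single group (ANA Group ID 1) in state 1, i.e. optimized. Once a host is connected, the same information lives in the ANA log page, log identifier 0x0c in the NVMe specification; a hedged sketch using nvme-cli's generic get-log command (the /dev/nvme0 device name is hypothetical):

# Dump the raw ANA log page from a connected controller (sketch).
nvme get-log /dev/nvme0 --log-id=0x0c --log-len=4096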
00:24:55.021 Write Zeroes Command: Supported 00:24:55.021 Set Features Save Field: Not Supported 00:24:55.021 Reservations: Not Supported 00:24:55.021 Timestamp: Not Supported 00:24:55.021 Copy: Not Supported 00:24:55.021 Volatile Write Cache: Present 00:24:55.021 Atomic Write Unit (Normal): 1 00:24:55.021 Atomic Write Unit (PFail): 1 00:24:55.021 Atomic Compare & Write Unit: 1 00:24:55.021 Fused Compare & Write: Not Supported 00:24:55.021 Scatter-Gather List 00:24:55.021 SGL Command Set: Supported 00:24:55.021 SGL Keyed: Not Supported 00:24:55.021 SGL Bit Bucket Descriptor: Not Supported 00:24:55.021 SGL Metadata Pointer: Not Supported 00:24:55.021 Oversized SGL: Not Supported 00:24:55.021 SGL Metadata Address: Not Supported 00:24:55.021 SGL Offset: Supported 00:24:55.021 Transport SGL Data Block: Not Supported 00:24:55.021 Replay Protected Memory Block: Not Supported 00:24:55.021 00:24:55.021 Firmware Slot Information 00:24:55.021 ========================= 00:24:55.021 Active slot: 0 00:24:55.021 00:24:55.021 Asymmetric Namespace Access 00:24:55.021 =========================== 00:24:55.021 Change Count : 0 00:24:55.021 Number of ANA Group Descriptors : 1 00:24:55.021 ANA Group Descriptor : 0 00:24:55.021 ANA Group ID : 1 00:24:55.021 Number of NSID Values : 1 00:24:55.021 Change Count : 0 00:24:55.021 ANA State : 1 00:24:55.021 Namespace Identifier : 1 00:24:55.021 00:24:55.021 Commands Supported and Effects 00:24:55.021 ============================== 00:24:55.021 Admin Commands 00:24:55.021 -------------- 00:24:55.021 Get Log Page (02h): Supported 00:24:55.021 Identify (06h): Supported 00:24:55.021 Abort (08h): Supported 00:24:55.021 Set Features (09h): Supported 00:24:55.021 Get Features (0Ah): Supported 00:24:55.021 Asynchronous Event Request (0Ch): Supported 00:24:55.021 Keep Alive (18h): Supported 00:24:55.021 I/O Commands 00:24:55.021 ------------ 00:24:55.021 Flush (00h): Supported 00:24:55.021 Write (01h): Supported LBA-Change 00:24:55.021 Read (02h): Supported 00:24:55.021 Write Zeroes (08h): Supported LBA-Change 00:24:55.021 Dataset Management (09h): Supported 00:24:55.021 00:24:55.021 Error Log 00:24:55.021 ========= 00:24:55.021 Entry: 0 00:24:55.021 Error Count: 0x3 00:24:55.021 Submission Queue Id: 0x0 00:24:55.021 Command Id: 0x5 00:24:55.021 Phase Bit: 0 00:24:55.021 Status Code: 0x2 00:24:55.021 Status Code Type: 0x0 00:24:55.021 Do Not Retry: 1 00:24:55.021 Error Location: 0x28 00:24:55.021 LBA: 0x0 00:24:55.021 Namespace: 0x0 00:24:55.021 Vendor Log Page: 0x0 00:24:55.021 ----------- 00:24:55.021 Entry: 1 00:24:55.021 Error Count: 0x2 00:24:55.021 Submission Queue Id: 0x0 00:24:55.021 Command Id: 0x5 00:24:55.021 Phase Bit: 0 00:24:55.021 Status Code: 0x2 00:24:55.021 Status Code Type: 0x0 00:24:55.021 Do Not Retry: 1 00:24:55.021 Error Location: 0x28 00:24:55.021 LBA: 0x0 00:24:55.021 Namespace: 0x0 00:24:55.021 Vendor Log Page: 0x0 00:24:55.021 ----------- 00:24:55.021 Entry: 2 00:24:55.021 Error Count: 0x1 00:24:55.021 Submission Queue Id: 0x0 00:24:55.021 Command Id: 0x4 00:24:55.021 Phase Bit: 0 00:24:55.021 Status Code: 0x2 00:24:55.021 Status Code Type: 0x0 00:24:55.021 Do Not Retry: 1 00:24:55.021 Error Location: 0x28 00:24:55.021 LBA: 0x0 00:24:55.021 Namespace: 0x0 00:24:55.021 Vendor Log Page: 0x0 00:24:55.021 00:24:55.021 Number of Queues 00:24:55.021 ================ 00:24:55.021 Number of I/O Submission Queues: 128 00:24:55.021 Number of I/O Completion Queues: 128 00:24:55.021 00:24:55.021 ZNS Specific Controller Data 00:24:55.021 
============================ 00:24:55.021 Zone Append Size Limit: 0 00:24:55.021 00:24:55.021 00:24:55.021 Active Namespaces 00:24:55.021 ================= 00:24:55.021 get_feature(0x05) failed 00:24:55.021 Namespace ID:1 00:24:55.021 Command Set Identifier: NVM (00h) 00:24:55.021 Deallocate: Supported 00:24:55.021 Deallocated/Unwritten Error: Not Supported 00:24:55.021 Deallocated Read Value: Unknown 00:24:55.021 Deallocate in Write Zeroes: Not Supported 00:24:55.021 Deallocated Guard Field: 0xFFFF 00:24:55.021 Flush: Supported 00:24:55.021 Reservation: Not Supported 00:24:55.021 Namespace Sharing Capabilities: Multiple Controllers 00:24:55.021 Size (in LBAs): 1953525168 (931GiB) 00:24:55.021 Capacity (in LBAs): 1953525168 (931GiB) 00:24:55.021 Utilization (in LBAs): 1953525168 (931GiB) 00:24:55.021 UUID: 6860a2bd-8a4d-4fad-ba02-f7352c6687ef 00:24:55.021 Thin Provisioning: Not Supported 00:24:55.021 Per-NS Atomic Units: Yes 00:24:55.021 Atomic Boundary Size (Normal): 0 00:24:55.021 Atomic Boundary Size (PFail): 0 00:24:55.021 Atomic Boundary Offset: 0 00:24:55.021 NGUID/EUI64 Never Reused: No 00:24:55.021 ANA group ID: 1 00:24:55.021 Namespace Write Protected: No 00:24:55.021 Number of LBA Formats: 1 00:24:55.021 Current LBA Format: LBA Format #00 00:24:55.021 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:55.021 00:24:55.021 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:55.021 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:55.021 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:24:55.021 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:55.021 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:24:55.021 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:55.021 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:55.021 rmmod nvme_tcp 00:24:55.021 rmmod nvme_fabrics 00:24:55.021 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:55.021 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:24:55.021 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:24:55.021 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:24:55.021 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:55.021 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:55.021 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:55.021 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:55.021 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:55.021 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.021 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:55.021 20:50:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.573 20:50:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:57.573 
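With the identify pass finished, nvmftestfini above unwinds the initiator side (module unload plus address flush), and the clean_kernel_target trace that follows dismantles the kernel target's configfs tree in child-before-parent order. A condensed sketch of both halves, with every path and command taken from the surrounding trace:

# Initiator-side teardown (from the nvmftestfini trace above).
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
ip -4 addr flush cvl_0_1
# Kernel-target teardown (from the clean_kernel_target trace below): unlink the
# port from the subsystem first, then remove namespace, port, and subsystem.
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
rmdir /sys/kernel/config/nvmet/ports/1
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
modprobe -r nvmet_tcp nvmet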
20:50:31 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:57.573 20:50:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:57.573 20:50:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:24:57.573 20:50:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:57.573 20:50:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:57.573 20:50:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:57.573 20:50:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:57.573 20:50:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:57.573 20:50:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:57.573 20:50:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:00.107 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:00.107 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:00.107 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:00.107 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:00.107 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:00.107 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:00.107 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:00.107 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:00.107 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:00.107 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:00.107 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:00.107 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:00.107 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:00.107 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:00.107 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:00.107 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:00.675 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:00.933 00:25:00.933 real 0m14.877s 00:25:00.933 user 0m3.599s 00:25:00.933 sys 0m7.530s 00:25:00.933 20:50:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:00.933 20:50:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:00.933 ************************************ 00:25:00.933 END TEST nvmf_identify_kernel_target 00:25:00.933 ************************************ 00:25:00.933 20:50:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:00.933 20:50:35 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:00.933 20:50:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:00.933 20:50:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:00.934 20:50:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:00.934 ************************************ 00:25:00.934 START TEST nvmf_auth_host 00:25:00.934 ************************************ 00:25:00.934 20:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:01.192 * Looking for test storage... 00:25:01.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:01.192 20:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:01.193 20:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:01.193 20:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:25:01.193 20:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:01.193 20:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:01.193 20:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:01.193 20:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:01.193 20:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:01.193 20:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:01.193 20:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.193 20:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:01.193 20:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:01.193 20:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:01.193 20:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:01.193 20:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:25:01.193 20:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:06.465 
20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:06.465 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:06.465 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:06.465 Found net devices under 0000:86:00.0: 
cvl_0_0 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:06.465 Found net devices under 0000:86:00.1: cvl_0_1 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:06.465 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:06.466 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:06.466 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:25:06.466 00:25:06.466 --- 10.0.0.2 ping statistics --- 00:25:06.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:06.466 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:06.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:06.466 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:25:06.466 00:25:06.466 --- 10.0.0.1 ping statistics --- 00:25:06.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:06.466 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2817094 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2817094 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2817094 ']' 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
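The nvmf_tcp_init trace above builds the two-endpoint topology this test relies on: the first e810 port (cvl_0_0) moves into a private network namespace and becomes the target at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, with both ping checks passing. The same setup condensed, all names and addresses verbatim from the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2   # initiator -> target sanity check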
00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:06.466 20:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=26ff038cff4d44a4a84a4c2b64dde3fb 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.6C9 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 26ff038cff4d44a4a84a4c2b64dde3fb 0 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 26ff038cff4d44a4a84a4c2b64dde3fb 0 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=26ff038cff4d44a4a84a4c2b64dde3fb 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.6C9 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.6C9 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.6C9 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:07.403 
20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1a4e2f0495c9aadfd37726b46f45c5f26828b17d5b2e96893ebc1b39085d6228 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Qyh 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1a4e2f0495c9aadfd37726b46f45c5f26828b17d5b2e96893ebc1b39085d6228 3 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1a4e2f0495c9aadfd37726b46f45c5f26828b17d5b2e96893ebc1b39085d6228 3 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1a4e2f0495c9aadfd37726b46f45c5f26828b17d5b2e96893ebc1b39085d6228 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Qyh 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Qyh 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Qyh 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8fc87d73685310916f21b3434e8ddffabd1fda1ed65d7265 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Hcr 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8fc87d73685310916f21b3434e8ddffabd1fda1ed65d7265 0 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8fc87d73685310916f21b3434e8ddffabd1fda1ed65d7265 0 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8fc87d73685310916f21b3434e8ddffabd1fda1ed65d7265 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Hcr 00:25:07.403 20:50:41 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Hcr 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Hcr 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c2b6f7adeb586296947bbff94048955729d657e8e8ee7a2f 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.6fj 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c2b6f7adeb586296947bbff94048955729d657e8e8ee7a2f 2 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c2b6f7adeb586296947bbff94048955729d657e8e8ee7a2f 2 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c2b6f7adeb586296947bbff94048955729d657e8e8ee7a2f 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.6fj 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.6fj 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.6fj 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=96624703908db07e0d8dd92c0392b528 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Wzg 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 96624703908db07e0d8dd92c0392b528 1 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 96624703908db07e0d8dd92c0392b528 1 
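Each gen_dhchap_key call traced through this stretch follows the same recipe: draw the requested number of random bytes from /dev/urandom, hex-encode them with xxd, then hand the hex string to the format_dhchap_key helper (the python step in the trace) that wraps it in SPDK's DHHC-1 key format before the file is locked down to 0600. A sketch of one iteration; the redirection into the temp file is an assumption, since the trace only shows the helper being invoked:

# One gen_dhchap_key iteration (sketch; digest "null", 16 bytes = 32 hex chars).
key=$(xxd -p -c0 -l 16 /dev/urandom)
file=$(mktemp -t spdk.key-null.XXX)
format_dhchap_key "$key" 0 > "$file"   # harness helper; emits a DHHC-1 blob
chmod 0600 "$file"
echo "$file"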
00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:07.403 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:07.404 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=96624703908db07e0d8dd92c0392b528 00:25:07.404 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:07.404 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:07.404 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Wzg 00:25:07.404 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Wzg 00:25:07.404 20:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Wzg 00:25:07.404 20:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:07.404 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:07.404 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:07.404 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:07.404 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:07.404 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:07.404 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:07.404 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d52fc56c83e33b59908a4c4be0025002 00:25:07.404 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:07.404 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.YX5 00:25:07.404 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d52fc56c83e33b59908a4c4be0025002 1 00:25:07.404 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d52fc56c83e33b59908a4c4be0025002 1 00:25:07.404 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:07.404 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:07.404 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d52fc56c83e33b59908a4c4be0025002 00:25:07.404 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:07.404 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.YX5 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.YX5 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.YX5 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=70ab8d405a5d01404084f8f107a4497db88f1619548bffb6 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.8A8 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 70ab8d405a5d01404084f8f107a4497db88f1619548bffb6 2 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 70ab8d405a5d01404084f8f107a4497db88f1619548bffb6 2 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=70ab8d405a5d01404084f8f107a4497db88f1619548bffb6 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.8A8 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.8A8 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.8A8 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=57e74bf4887d806f7b46f58b9cfed1df 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.yRz 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 57e74bf4887d806f7b46f58b9cfed1df 0 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 57e74bf4887d806f7b46f58b9cfed1df 0 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=57e74bf4887d806f7b46f58b9cfed1df 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:07.663 20:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:07.663 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.yRz 00:25:07.663 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.yRz 00:25:07.663 20:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.yRz 00:25:07.663 20:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:07.663 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:25:07.663 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:07.663 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:07.663 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:07.663 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:07.663 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:07.663 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=eb7820ce79129a7f76a4d3269e40f92fdc17121e624941b6441244173776a716 00:25:07.663 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:07.663 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.f0h 00:25:07.663 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key eb7820ce79129a7f76a4d3269e40f92fdc17121e624941b6441244173776a716 3 00:25:07.663 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 eb7820ce79129a7f76a4d3269e40f92fdc17121e624941b6441244173776a716 3 00:25:07.663 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:07.663 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:07.663 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=eb7820ce79129a7f76a4d3269e40f92fdc17121e624941b6441244173776a716 00:25:07.663 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:07.663 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:07.663 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.f0h 00:25:07.663 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.f0h 00:25:07.663 20:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.f0h 00:25:07.663 20:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:07.663 20:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2817094 00:25:07.663 20:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2817094 ']' 00:25:07.663 20:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:07.663 20:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:07.663 20:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:07.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
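At this point all five key slots are populated, each but the last paired with a controller key (ckey); slot 4 deliberately has no ckey. Collected from the trace into the two arrays the harness declared empty at the top of auth.sh:

keys=(/tmp/spdk.key-null.6C9 /tmp/spdk.key-null.Hcr /tmp/spdk.key-sha256.Wzg /tmp/spdk.key-sha384.8A8 /tmp/spdk.key-sha512.f0h)
ckeys=(/tmp/spdk.key-sha512.Qyh /tmp/spdk.key-sha384.6fj /tmp/spdk.key-sha256.YX5 /tmp/spdk.key-null.yRz '')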
00:25:07.663 20:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:07.663 20:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.6C9 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Qyh ]] 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Qyh 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Hcr 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.6fj ]] 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.6fj 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Wzg 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.YX5 ]] 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.YX5 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
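Once waitforlisten sees pid 2817094 answering on /var/tmp/spdk.sock, the host/auth.sh@80-82 loop traced here registers every generated secret with the target's keyring; the ckey registration is skipped for keyid 4, whose controller key is empty. The loop is equivalent to:

# Register each key file (and its controller counterpart, when one exists)
# under the names keyN/ckeyN that the attach calls below refer to.
for i in "${!keys[@]}"; do
    rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
    if [[ -n ${ckeys[i]} ]]; then
        rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
done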
00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.8A8 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.yRz ]] 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.yRz 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.f0h 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
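configure_kernel_target, whose configfs paths are assigned above, builds the in-kernel nvmet listener that the authenticated connects will target. Condensed from the mkdir/echo/ln sequence traced below; xtrace does not show redirection targets, so the attribute names on the right-hand side are a reconstruction from the standard nvmet configfs layout:

# Kernel nvmet target over configfs, condensed from the trace that follows.
# Attribute paths are assumed; the trace shows only the bare echoed values.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
echo 1            > "$subsys/attr_allow_any_host"   # flipped to 0 later by nvmet_auth_init
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"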
00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:07.923 20:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:10.457 Waiting for block devices as requested 00:25:10.716 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:10.716 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:10.716 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:10.975 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:10.975 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:10.975 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:11.234 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:11.234 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:11.234 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:11.234 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:11.493 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:11.493 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:11.493 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:11.493 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:11.751 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:11.751 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:11.751 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:12.318 20:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:12.318 20:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:12.318 20:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:12.318 20:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:12.318 20:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:12.318 20:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:12.318 20:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:12.318 20:50:46 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:12.318 20:50:46 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:12.577 No valid GPT data, bailing 00:25:12.577 20:50:46 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:12.577 20:50:46 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:12.577 20:50:46 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:12.577 20:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:12.577 20:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:12.578 00:25:12.578 Discovery Log Number of Records 2, Generation counter 2 00:25:12.578 =====Discovery Log Entry 0====== 00:25:12.578 trtype: tcp 00:25:12.578 adrfam: ipv4 00:25:12.578 subtype: current discovery subsystem 00:25:12.578 treq: not specified, sq flow control disable supported 00:25:12.578 portid: 1 00:25:12.578 trsvcid: 4420 00:25:12.578 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:12.578 traddr: 10.0.0.1 00:25:12.578 eflags: none 00:25:12.578 sectype: none 00:25:12.578 =====Discovery Log Entry 1====== 00:25:12.578 trtype: tcp 00:25:12.578 adrfam: ipv4 00:25:12.578 subtype: nvme subsystem 00:25:12.578 treq: not specified, sq flow control disable supported 00:25:12.578 portid: 1 00:25:12.578 trsvcid: 4420 00:25:12.578 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:12.578 traddr: 10.0.0.1 00:25:12.578 eflags: none 00:25:12.578 sectype: none 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZjODdkNzM2ODUzMTA5MTZmMjFiMzQzNGU4ZGRmZmFiZDFmZGExZWQ2NWQ3MjY1TV8/+Q==: 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZjODdkNzM2ODUzMTA5MTZmMjFiMzQzNGU4ZGRmZmFiZDFmZGExZWQ2NWQ3MjY1TV8/+Q==: 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: 
]] 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.578 20:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.836 nvme0n1 00:25:12.836 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.836 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.836 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.836 20:50:47 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.836 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.836 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.836 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.836 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.836 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.836 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.836 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.837 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:12.837 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:12.837 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.837 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:12.837 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.837 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:12.837 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:12.837 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:12.837 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjZmZjAzOGNmZjRkNDRhNGE4NGE0YzJiNjRkZGUzZmKK5ZBx: 00:25:12.837 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: 00:25:12.837 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:12.837 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:12.837 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjZmZjAzOGNmZjRkNDRhNGE4NGE0YzJiNjRkZGUzZmKK5ZBx: 00:25:12.837 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: ]] 00:25:12.837 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: 00:25:12.837 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:12.837 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.837 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:12.837 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:12.837 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:12.837 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.837 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:12.837 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.837 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.837 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.837 
20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.837 20:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:12.837 20:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:12.837 20:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:12.837 20:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.837 20:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.837 20:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:12.837 20:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.837 20:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:12.837 20:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:12.837 20:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:12.837 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:12.837 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.837 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.095 nvme0n1 00:25:13.095 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.095 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.095 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.095 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.095 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.095 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.095 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZjODdkNzM2ODUzMTA5MTZmMjFiMzQzNGU4ZGRmZmFiZDFmZGExZWQ2NWQ3MjY1TV8/+Q==: 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:13.096 20:50:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZjODdkNzM2ODUzMTA5MTZmMjFiMzQzNGU4ZGRmZmFiZDFmZGExZWQ2NWQ3MjY1TV8/+Q==: 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: ]] 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.096 nvme0n1 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
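Each iteration traced in this stretch is one authentication round trip: nvmet_auth_set_key pushes the digest, DH group and secrets into the kernel's host entry, then connect_authenticate dials back in through the SPDK bdev layer (target address resolved by get_main_ns_ip, which picks NVMF_INITIATOR_IP = 10.0.0.1 for tcp) and asserts that a controller named nvme0 appears before detaching. A condensed sketch; the dhchap_* attribute names on the host entry are assumed, since xtrace hides the redirections:

# One round trip, condensed from host/auth.sh@42-65 as traced above.
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
nvmet_auth_set_key() {                      # e.g. sha256 ffdhe2048 1
    local digest=$1 dhgroup=$2 keyid=$3
    local key ckey
    key=$(<"${keys[keyid]}") ckey=$(<"${ckeys[keyid]:-/dev/null}")
    echo "hmac($digest)" > "$host/dhchap_hash"      # attr names assumed
    echo "$dhgroup"      > "$host/dhchap_dhgroup"
    echo "$key"          > "$host/dhchap_key"
    [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
}
connect_authenticate() {                    # digest dhgroup keyid
    local digest=$1 dhgroup=$2 keyid=$3
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" "${ckey[@]}"
    # Success criterion traced at host/auth.sh@64: the controller shows up.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}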
00:25:13.096 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTY2MjQ3MDM5MDhkYjA3ZTBkOGRkOTJjMDM5MmI1MjhPqa7/: 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTY2MjQ3MDM5MDhkYjA3ZTBkOGRkOTJjMDM5MmI1MjhPqa7/: 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: ]] 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.358 nvme0n1 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzBhYjhkNDA1YTVkMDE0MDQwODRmOGYxMDdhNDQ5N2RiODhmMTYxOTU0OGJmZmI2bxssNA==: 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzBhYjhkNDA1YTVkMDE0MDQwODRmOGYxMDdhNDQ5N2RiODhmMTYxOTU0OGJmZmI2bxssNA==: 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: ]] 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: 00:25:13.358 20:50:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.358 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:13.359 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.359 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.708 nvme0n1 00:25:13.708 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.708 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.708 20:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.708 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.708 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.708 20:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI3ODIwY2U3OTEyOWE3Zjc2YTRkMzI2OWU0MGY5MmZkYzE3MTIxZTYyNDk0MWI2NDQxMjQ0MTczNzc2YTcxNg+vwPc=: 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI3ODIwY2U3OTEyOWE3Zjc2YTRkMzI2OWU0MGY5MmZkYzE3MTIxZTYyNDk0MWI2NDQxMjQ0MTczNzc2YTcxNg+vwPc=: 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.708 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.968 nvme0n1 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjZmZjAzOGNmZjRkNDRhNGE4NGE0YzJiNjRkZGUzZmKK5ZBx: 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjZmZjAzOGNmZjRkNDRhNGE4NGE0YzJiNjRkZGUzZmKK5ZBx: 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: ]] 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.968 nvme0n1 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.968 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.227 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.227 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.227 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.227 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZjODdkNzM2ODUzMTA5MTZmMjFiMzQzNGU4ZGRmZmFiZDFmZGExZWQ2NWQ3MjY1TV8/+Q==: 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZjODdkNzM2ODUzMTA5MTZmMjFiMzQzNGU4ZGRmZmFiZDFmZGExZWQ2NWQ3MjY1TV8/+Q==: 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: ]] 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.228 nvme0n1 00:25:14.228 
20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTY2MjQ3MDM5MDhkYjA3ZTBkOGRkOTJjMDM5MmI1MjhPqa7/: 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTY2MjQ3MDM5MDhkYjA3ZTBkOGRkOTJjMDM5MmI1MjhPqa7/: 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: ]] 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.228 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.487 nvme0n1 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzBhYjhkNDA1YTVkMDE0MDQwODRmOGYxMDdhNDQ5N2RiODhmMTYxOTU0OGJmZmI2bxssNA==: 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
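The repetition running above and through the rest of the section is this same round trip swept across the whole parameter matrix; the driver traced at host/auth.sh@100-104 reduces to three nested loops:

# Sweep from host/auth.sh@100-104: every digest x DH group x key id.
for digest in "${digests[@]}"; do            # sha256 sha384 sha512
    for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048 3072 4096 6144 8192
        for keyid in "${!keys[@]}"; do       # 0..4
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done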
00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzBhYjhkNDA1YTVkMDE0MDQwODRmOGYxMDdhNDQ5N2RiODhmMTYxOTU0OGJmZmI2bxssNA==: 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: ]] 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:14.487 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.488 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.488 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:14.488 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.488 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:14.488 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:14.488 20:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:14.488 20:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:14.488 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.488 20:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.745 nvme0n1 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.746 
20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI3ODIwY2U3OTEyOWE3Zjc2YTRkMzI2OWU0MGY5MmZkYzE3MTIxZTYyNDk0MWI2NDQxMjQ0MTczNzc2YTcxNg+vwPc=: 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI3ODIwY2U3OTEyOWE3Zjc2YTRkMzI2OWU0MGY5MmZkYzE3MTIxZTYyNDk0MWI2NDQxMjQ0MTczNzc2YTcxNg+vwPc=: 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.746 20:50:49 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.746 20:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.004 nvme0n1 00:25:15.004 20:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.004 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.004 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.004 20:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.004 20:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.004 20:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.004 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.004 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.004 20:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.004 20:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.004 20:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.004 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:15.004 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.005 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:15.005 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.005 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:15.005 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:15.005 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:15.005 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjZmZjAzOGNmZjRkNDRhNGE4NGE0YzJiNjRkZGUzZmKK5ZBx: 00:25:15.005 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: 00:25:15.005 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:15.005 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:15.005 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjZmZjAzOGNmZjRkNDRhNGE4NGE0YzJiNjRkZGUzZmKK5ZBx: 00:25:15.005 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: ]] 00:25:15.005 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: 00:25:15.005 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:15.005 20:50:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.005 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:15.005 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:15.005 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:15.005 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.005 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:15.005 20:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.005 20:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.005 20:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.005 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.005 20:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:15.005 20:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:15.005 20:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:15.005 20:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.005 20:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.005 20:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:15.005 20:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.005 20:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:15.005 20:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:15.005 20:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:15.005 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:15.005 20:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.005 20:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.264 nvme0n1 00:25:15.264 20:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.264 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.264 20:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.264 20:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.264 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.264 20:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.264 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.264 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.264 20:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.264 20:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.523 20:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.523 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:15.523 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:15.523 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.523 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:15.523 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:15.523 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:15.523 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZjODdkNzM2ODUzMTA5MTZmMjFiMzQzNGU4ZGRmZmFiZDFmZGExZWQ2NWQ3MjY1TV8/+Q==: 00:25:15.523 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: 00:25:15.523 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:15.523 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:15.523 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZjODdkNzM2ODUzMTA5MTZmMjFiMzQzNGU4ZGRmZmFiZDFmZGExZWQ2NWQ3MjY1TV8/+Q==: 00:25:15.523 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: ]] 00:25:15.523 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: 00:25:15.523 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:15.523 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.523 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:15.523 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:15.523 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:15.523 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.523 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:15.523 20:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.523 20:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.523 20:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.523 20:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.523 20:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:15.523 20:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:15.523 20:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:15.523 20:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.523 20:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.523 20:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:15.523 20:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.523 20:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:15.523 20:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:15.523 20:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:15.523 20:50:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:15.523 20:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.523 20:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.781 nvme0n1 00:25:15.781 20:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.781 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.781 20:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.781 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.781 20:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.781 20:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.781 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.781 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.781 20:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.781 20:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.781 20:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.781 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.781 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:15.781 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.782 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:15.782 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:15.782 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:15.782 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTY2MjQ3MDM5MDhkYjA3ZTBkOGRkOTJjMDM5MmI1MjhPqa7/: 00:25:15.782 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: 00:25:15.782 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:15.782 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:15.782 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTY2MjQ3MDM5MDhkYjA3ZTBkOGRkOTJjMDM5MmI1MjhPqa7/: 00:25:15.782 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: ]] 00:25:15.782 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: 00:25:15.782 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:15.782 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.782 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:15.782 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:15.782 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:15.782 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.782 20:50:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:15.782 20:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.782 20:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.782 20:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.782 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.782 20:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:15.782 20:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:15.782 20:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:15.782 20:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.782 20:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.782 20:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:15.782 20:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.782 20:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:15.782 20:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:15.782 20:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:15.782 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:15.782 20:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.782 20:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.040 nvme0n1 00:25:16.040 20:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.040 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.040 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.040 20:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.040 20:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.040 20:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.040 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.040 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.040 20:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.040 20:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.040 20:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.040 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.040 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:16.040 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.040 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:16.040 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:16.040 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
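
[editor's note] Every nvme0n1 block in this trace is one pass of the same cycle driven by the loop at host/auth.sh@101-@103: program one DHHC-1 key into the kernel nvmet target, restrict the SPDK host to a single digest/dhgroup pair, attach with the matching key, confirm the controller actually came up, and detach. Condensed into a sketch below; rpc_cmd, nvmet_auth_set_key, get_main_ns_ip, and the keys/ckeys arrays are provided by the surrounding harness exactly as the trace shows, and the sha256/ffdhe4096 pair matches the round in progress at this point in the log.

for keyid in "${!keys[@]}"; do
	# auth.sh@103: install key $keyid (and its ctrlr key, if any) on the target
	nvmet_auth_set_key sha256 ffdhe4096 "$keyid"

	# auth.sh@60: only advertise the digest/dhgroup pair under test
	rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

	# auth.sh@61: connect; a DH-HMAC-CHAP failure would make this attach fail
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
		-a "$(get_main_ns_ip)" -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key${keyid}" \
		${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}

	# auth.sh@64-@65: verify the controller exists, then tear it down
	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
	rpc_cmd bdev_nvme_detach_controller nvme0
done

The trace of the ffdhe4096/keyid=3 round continues below.
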
00:25:16.040 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzBhYjhkNDA1YTVkMDE0MDQwODRmOGYxMDdhNDQ5N2RiODhmMTYxOTU0OGJmZmI2bxssNA==: 00:25:16.040 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: 00:25:16.040 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:16.040 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:16.040 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzBhYjhkNDA1YTVkMDE0MDQwODRmOGYxMDdhNDQ5N2RiODhmMTYxOTU0OGJmZmI2bxssNA==: 00:25:16.040 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: ]] 00:25:16.040 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: 00:25:16.040 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:16.040 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.040 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:16.040 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:16.040 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:16.040 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.040 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:16.040 20:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.040 20:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.040 20:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.040 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.040 20:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:16.040 20:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:16.040 20:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:16.041 20:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.041 20:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.041 20:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:16.041 20:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.041 20:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:16.041 20:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:16.041 20:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:16.041 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:16.041 20:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.041 20:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.299 nvme0n1 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.299 20:50:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI3ODIwY2U3OTEyOWE3Zjc2YTRkMzI2OWU0MGY5MmZkYzE3MTIxZTYyNDk0MWI2NDQxMjQ0MTczNzc2YTcxNg+vwPc=: 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI3ODIwY2U3OTEyOWE3Zjc2YTRkMzI2OWU0MGY5MmZkYzE3MTIxZTYyNDk0MWI2NDQxMjQ0MTczNzc2YTcxNg+vwPc=: 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.299 20:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.557 nvme0n1 00:25:16.557 20:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.557 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.557 20:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.557 20:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.557 20:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.557 20:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjZmZjAzOGNmZjRkNDRhNGE4NGE0YzJiNjRkZGUzZmKK5ZBx: 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjZmZjAzOGNmZjRkNDRhNGE4NGE0YzJiNjRkZGUzZmKK5ZBx: 00:25:16.816 20:50:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: ]] 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.816 20:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.075 nvme0n1 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.075 
20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZjODdkNzM2ODUzMTA5MTZmMjFiMzQzNGU4ZGRmZmFiZDFmZGExZWQ2NWQ3MjY1TV8/+Q==: 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZjODdkNzM2ODUzMTA5MTZmMjFiMzQzNGU4ZGRmZmFiZDFmZGExZWQ2NWQ3MjY1TV8/+Q==: 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: ]] 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.075 20:50:51 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:17.075 20:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:17.076 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:17.076 20:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.076 20:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.643 nvme0n1 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTY2MjQ3MDM5MDhkYjA3ZTBkOGRkOTJjMDM5MmI1MjhPqa7/: 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTY2MjQ3MDM5MDhkYjA3ZTBkOGRkOTJjMDM5MmI1MjhPqa7/: 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: ]] 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:17.643 20:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:17.644 20:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:17.644 20:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:17.644 20:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.644 20:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.901 nvme0n1 00:25:17.901 20:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.901 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.901 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.901 20:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.901 20:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.901 20:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.159 
20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzBhYjhkNDA1YTVkMDE0MDQwODRmOGYxMDdhNDQ5N2RiODhmMTYxOTU0OGJmZmI2bxssNA==: 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzBhYjhkNDA1YTVkMDE0MDQwODRmOGYxMDdhNDQ5N2RiODhmMTYxOTU0OGJmZmI2bxssNA==: 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: ]] 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.159 20:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.416 nvme0n1 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI3ODIwY2U3OTEyOWE3Zjc2YTRkMzI2OWU0MGY5MmZkYzE3MTIxZTYyNDk0MWI2NDQxMjQ0MTczNzc2YTcxNg+vwPc=: 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI3ODIwY2U3OTEyOWE3Zjc2YTRkMzI2OWU0MGY5MmZkYzE3MTIxZTYyNDk0MWI2NDQxMjQ0MTczNzc2YTcxNg+vwPc=: 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.416 20:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:18.417 20:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.417 20:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:18.417 20:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:18.417 20:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:18.417 20:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:18.417 20:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.417 20:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.983 nvme0n1 00:25:18.983 20:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.983 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.983 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.983 20:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.983 20:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.983 20:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.983 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.983 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.983 20:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.983 20:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.983 20:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.983 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:18.983 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.983 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:18.983 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.983 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:18.983 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:18.983 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:18.983 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjZmZjAzOGNmZjRkNDRhNGE4NGE0YzJiNjRkZGUzZmKK5ZBx: 00:25:18.983 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: 00:25:18.983 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:18.983 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:18.983 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjZmZjAzOGNmZjRkNDRhNGE4NGE0YzJiNjRkZGUzZmKK5ZBx: 00:25:18.983 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: ]] 00:25:18.983 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: 00:25:18.983 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:18.983 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.983 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:18.983 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:18.983 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:18.983 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.983 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:18.983 20:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.983 20:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.983 20:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.983 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.983 20:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:18.984 20:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:18.984 20:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:18.984 20:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.984 20:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.984 20:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:18.984 20:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.984 20:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:18.984 20:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:18.984 20:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:18.984 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:18.984 20:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.984 20:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.550 nvme0n1 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.550 20:50:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZjODdkNzM2ODUzMTA5MTZmMjFiMzQzNGU4ZGRmZmFiZDFmZGExZWQ2NWQ3MjY1TV8/+Q==: 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZjODdkNzM2ODUzMTA5MTZmMjFiMzQzNGU4ZGRmZmFiZDFmZGExZWQ2NWQ3MjY1TV8/+Q==: 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: ]] 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.550 20:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.115 nvme0n1 00:25:20.115 20:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.115 20:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.115 20:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.115 20:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.115 20:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.116 20:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTY2MjQ3MDM5MDhkYjA3ZTBkOGRkOTJjMDM5MmI1MjhPqa7/: 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:OTY2MjQ3MDM5MDhkYjA3ZTBkOGRkOTJjMDM5MmI1MjhPqa7/: 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: ]] 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.374 20:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.940 nvme0n1 00:25:20.940 20:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.940 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.940 20:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.940 20:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.940 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.940 20:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.940 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.940 
20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.940 20:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.940 20:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.941 20:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.941 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.941 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:20.941 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.941 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:20.941 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:20.941 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:20.941 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzBhYjhkNDA1YTVkMDE0MDQwODRmOGYxMDdhNDQ5N2RiODhmMTYxOTU0OGJmZmI2bxssNA==: 00:25:20.941 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: 00:25:20.941 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:20.941 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:20.941 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzBhYjhkNDA1YTVkMDE0MDQwODRmOGYxMDdhNDQ5N2RiODhmMTYxOTU0OGJmZmI2bxssNA==: 00:25:20.941 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: ]] 00:25:20.941 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: 00:25:20.941 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:20.941 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.941 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:20.941 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:20.941 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:20.941 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.941 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:20.941 20:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.941 20:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.941 20:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.941 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.941 20:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:20.941 20:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:20.941 20:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:20.941 20:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.941 20:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.941 20:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:25:20.941 20:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.941 20:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:20.941 20:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:20.941 20:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:20.941 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:20.941 20:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.941 20:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.508 nvme0n1 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI3ODIwY2U3OTEyOWE3Zjc2YTRkMzI2OWU0MGY5MmZkYzE3MTIxZTYyNDk0MWI2NDQxMjQ0MTczNzc2YTcxNg+vwPc=: 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI3ODIwY2U3OTEyOWE3Zjc2YTRkMzI2OWU0MGY5MmZkYzE3MTIxZTYyNDk0MWI2NDQxMjQ0MTczNzc2YTcxNg+vwPc=: 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:21.508 
20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.508 20:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.076 nvme0n1 00:25:22.076 20:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.076 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.076 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.076 20:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.076 20:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.076 20:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.076 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.076 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.076 20:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.076 20:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjZmZjAzOGNmZjRkNDRhNGE4NGE0YzJiNjRkZGUzZmKK5ZBx: 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjZmZjAzOGNmZjRkNDRhNGE4NGE0YzJiNjRkZGUzZmKK5ZBx: 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: ]] 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.335 nvme0n1 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZjODdkNzM2ODUzMTA5MTZmMjFiMzQzNGU4ZGRmZmFiZDFmZGExZWQ2NWQ3MjY1TV8/+Q==: 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZjODdkNzM2ODUzMTA5MTZmMjFiMzQzNGU4ZGRmZmFiZDFmZGExZWQ2NWQ3MjY1TV8/+Q==: 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: ]] 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
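The trace above repeats one fixed pattern per (digest, dhgroup, keyid) combination. A minimal sketch of the loops driving it, reconstructed from the host/auth.sh@100-@104 markers in the xtrace; the digests/dhgroups/keys/ckeys arrays and the two helper functions are defined elsewhere in the harness and are assumed here:

for digest in "${digests[@]}"; do              # host/auth.sh@100
    for dhgroup in "${dhgroups[@]}"; do        # host/auth.sh@101
        for keyid in "${!keys[@]}"; do         # host/auth.sh@102
            # program the key (and optional controller key) on the target side
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # host/auth.sh@103
            # attach from the host side and verify the authenticated connect
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # host/auth.sh@104
        done
    done
done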
00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.335 20:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.594 nvme0n1 00:25:22.594 20:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.594 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.594 20:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.594 20:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.594 20:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.594 20:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.594 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.594 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.594 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.594 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.594 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.594 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.594 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:22.594 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.594 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:22.594 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:22.594 20:50:57 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:25:22.594 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTY2MjQ3MDM5MDhkYjA3ZTBkOGRkOTJjMDM5MmI1MjhPqa7/: 00:25:22.594 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: 00:25:22.594 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:22.594 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:22.594 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTY2MjQ3MDM5MDhkYjA3ZTBkOGRkOTJjMDM5MmI1MjhPqa7/: 00:25:22.594 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: ]] 00:25:22.595 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: 00:25:22.595 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:22.595 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.595 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:22.595 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:22.595 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:22.595 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.595 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:22.595 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.595 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.595 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.595 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.595 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:22.595 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.595 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.595 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.595 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.595 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.595 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.595 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.595 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.595 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.595 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:22.595 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.595 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.854 nvme0n1 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzBhYjhkNDA1YTVkMDE0MDQwODRmOGYxMDdhNDQ5N2RiODhmMTYxOTU0OGJmZmI2bxssNA==: 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzBhYjhkNDA1YTVkMDE0MDQwODRmOGYxMDdhNDQ5N2RiODhmMTYxOTU0OGJmZmI2bxssNA==: 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: ]] 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.854 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.855 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.855 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.855 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.855 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:22.855 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.855 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.114 nvme0n1 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI3ODIwY2U3OTEyOWE3Zjc2YTRkMzI2OWU0MGY5MmZkYzE3MTIxZTYyNDk0MWI2NDQxMjQ0MTczNzc2YTcxNg+vwPc=: 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZWI3ODIwY2U3OTEyOWE3Zjc2YTRkMzI2OWU0MGY5MmZkYzE3MTIxZTYyNDk0MWI2NDQxMjQ0MTczNzc2YTcxNg+vwPc=: 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.114 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.373 nvme0n1 00:25:23.373 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.373 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.373 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.373 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.373 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.373 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.373 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.373 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.373 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:25:23.373 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.373 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.373 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:23.373 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.373 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:23.373 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.373 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:23.373 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:23.373 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:23.373 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjZmZjAzOGNmZjRkNDRhNGE4NGE0YzJiNjRkZGUzZmKK5ZBx: 00:25:23.373 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: 00:25:23.373 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:23.373 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:23.373 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjZmZjAzOGNmZjRkNDRhNGE4NGE0YzJiNjRkZGUzZmKK5ZBx: 00:25:23.373 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: ]] 00:25:23.373 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: 00:25:23.373 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:23.373 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.373 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:23.373 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:23.373 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:23.373 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.373 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:23.373 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.373 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.374 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.374 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.374 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:23.374 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:23.374 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:23.374 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.374 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.374 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
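get_main_ns_ip (the nvmf/common.sh@741-@755 lines in the trace) picks the address the host connects to. A sketch consistent with the expanded commands above; the trace only ever shows the transport already expanded to tcp, so the $TEST_TRANSPORT name below is an assumption:

get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    # values are the *names* of environment variables, dereferenced below
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z $TEST_TRANSPORT ]] && return 1          # assumed variable name
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z $ip ]] && return 1
    [[ -z ${!ip} ]] && return 1                   # indirect expansion
    echo "${!ip}"                                 # resolves to 10.0.0.1 above
}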
00:25:23.374 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.374 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:23.374 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:23.374 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:23.374 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:23.374 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.374 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.633 nvme0n1 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZjODdkNzM2ODUzMTA5MTZmMjFiMzQzNGU4ZGRmZmFiZDFmZGExZWQ2NWQ3MjY1TV8/+Q==: 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZjODdkNzM2ODUzMTA5MTZmMjFiMzQzNGU4ZGRmZmFiZDFmZGExZWQ2NWQ3MjY1TV8/+Q==: 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: ]] 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
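The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) assignment seen throughout (host/auth.sh@58) uses bash's ${var:+word} expansion: the array receives the two extra arguments only when a controller key exists for that keyid. That is why the key4 attach calls above carry no --dhchap-ctrlr-key flag (the trace shows ckey= empty for keyid 4) while key0-key3 pass ckey0-ckey3. A standalone demonstration, with hypothetical placeholder values standing in for the harness's real keys:

# hypothetical values; the real ckeys array comes from the harness
ckeys=([0]="DHHC-1:03:placeholder=:" [4]="")

keyid=4
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${#ckey[@]}"    # prints 0 -> attach runs without a controller key

keyid=0
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${ckey[@]}"     # prints: --dhchap-ctrlr-key ckey0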
00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.633 20:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.633 nvme0n1 00:25:23.633 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTY2MjQ3MDM5MDhkYjA3ZTBkOGRkOTJjMDM5MmI1MjhPqa7/: 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTY2MjQ3MDM5MDhkYjA3ZTBkOGRkOTJjMDM5MmI1MjhPqa7/: 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: ]] 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.892 nvme0n1 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.892 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzBhYjhkNDA1YTVkMDE0MDQwODRmOGYxMDdhNDQ5N2RiODhmMTYxOTU0OGJmZmI2bxssNA==: 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzBhYjhkNDA1YTVkMDE0MDQwODRmOGYxMDdhNDQ5N2RiODhmMTYxOTU0OGJmZmI2bxssNA==: 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: ]] 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.152 nvme0n1 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.152 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZWI3ODIwY2U3OTEyOWE3Zjc2YTRkMzI2OWU0MGY5MmZkYzE3MTIxZTYyNDk0MWI2NDQxMjQ0MTczNzc2YTcxNg+vwPc=: 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI3ODIwY2U3OTEyOWE3Zjc2YTRkMzI2OWU0MGY5MmZkYzE3MTIxZTYyNDk0MWI2NDQxMjQ0MTczNzc2YTcxNg+vwPc=: 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.411 nvme0n1 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.411 20:50:58 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.411 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.670 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.670 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.670 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.670 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.670 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.670 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:24.670 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.670 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:24.670 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.670 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:24.670 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:24.670 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:24.670 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjZmZjAzOGNmZjRkNDRhNGE4NGE0YzJiNjRkZGUzZmKK5ZBx: 00:25:24.670 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: 00:25:24.670 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:24.670 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:24.670 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjZmZjAzOGNmZjRkNDRhNGE4NGE0YzJiNjRkZGUzZmKK5ZBx: 00:25:24.670 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: ]] 00:25:24.670 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: 00:25:24.670 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:24.671 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.671 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:24.671 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:24.671 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:24.671 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.671 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:24.671 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.671 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.671 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.671 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.671 20:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:24.671 20:50:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:25:24.671 20:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:24.671 20:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.671 20:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.671 20:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:24.671 20:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.671 20:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:24.671 20:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:24.671 20:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:24.671 20:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:24.671 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.671 20:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.929 nvme0n1 00:25:24.929 20:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.929 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.929 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.929 20:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.929 20:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.929 20:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.929 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.929 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.929 20:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.929 20:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.929 20:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.930 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.930 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:24.930 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.930 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:24.930 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:24.930 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:24.930 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZjODdkNzM2ODUzMTA5MTZmMjFiMzQzNGU4ZGRmZmFiZDFmZGExZWQ2NWQ3MjY1TV8/+Q==: 00:25:24.930 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: 00:25:24.930 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:24.930 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:24.930 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OGZjODdkNzM2ODUzMTA5MTZmMjFiMzQzNGU4ZGRmZmFiZDFmZGExZWQ2NWQ3MjY1TV8/+Q==: 00:25:24.930 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: ]] 00:25:24.930 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: 00:25:24.930 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:24.930 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.930 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:24.930 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:24.930 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:24.930 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.930 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:24.930 20:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.930 20:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.930 20:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.930 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.930 20:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:24.930 20:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:24.930 20:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:24.930 20:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.930 20:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.930 20:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:24.930 20:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.930 20:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:24.930 20:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:24.930 20:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:24.930 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:24.930 20:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.930 20:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.189 nvme0n1 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.189 20:50:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTY2MjQ3MDM5MDhkYjA3ZTBkOGRkOTJjMDM5MmI1MjhPqa7/: 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTY2MjQ3MDM5MDhkYjA3ZTBkOGRkOTJjMDM5MmI1MjhPqa7/: 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: ]] 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.189 20:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.448 nvme0n1 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzBhYjhkNDA1YTVkMDE0MDQwODRmOGYxMDdhNDQ5N2RiODhmMTYxOTU0OGJmZmI2bxssNA==: 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzBhYjhkNDA1YTVkMDE0MDQwODRmOGYxMDdhNDQ5N2RiODhmMTYxOTU0OGJmZmI2bxssNA==: 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: ]] 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:25.448 20:50:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.448 20:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.707 nvme0n1 00:25:25.707 20:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.707 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.707 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.707 20:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.707 20:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.707 20:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI3ODIwY2U3OTEyOWE3Zjc2YTRkMzI2OWU0MGY5MmZkYzE3MTIxZTYyNDk0MWI2NDQxMjQ0MTczNzc2YTcxNg+vwPc=: 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI3ODIwY2U3OTEyOWE3Zjc2YTRkMzI2OWU0MGY5MmZkYzE3MTIxZTYyNDk0MWI2NDQxMjQ0MTczNzc2YTcxNg+vwPc=: 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:25.966 20:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.225 nvme0n1 00:25:26.225 20:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.225 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.225 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.225 20:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.225 20:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.225 20:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.225 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.225 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.225 20:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.225 20:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.225 20:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.225 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:26.225 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.225 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:26.225 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.226 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:26.226 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:26.226 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:26.226 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjZmZjAzOGNmZjRkNDRhNGE4NGE0YzJiNjRkZGUzZmKK5ZBx: 00:25:26.226 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: 00:25:26.226 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:26.226 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:26.226 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjZmZjAzOGNmZjRkNDRhNGE4NGE0YzJiNjRkZGUzZmKK5ZBx: 00:25:26.226 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: ]] 00:25:26.226 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: 00:25:26.226 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:26.226 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.226 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:26.226 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:26.226 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:26.226 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.226 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:25:26.226 20:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.226 20:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.226 20:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.226 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.226 20:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:26.226 20:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:26.226 20:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:26.226 20:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.226 20:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.226 20:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:26.226 20:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.226 20:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:26.226 20:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:26.226 20:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:26.226 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:26.226 20:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.226 20:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.485 nvme0n1 00:25:26.485 20:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.485 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.485 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.485 20:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.485 20:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.485 20:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.744 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.744 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.744 20:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.744 20:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.744 20:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.744 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.744 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:26.744 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.744 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:26.744 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:26.744 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:26.744 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OGZjODdkNzM2ODUzMTA5MTZmMjFiMzQzNGU4ZGRmZmFiZDFmZGExZWQ2NWQ3MjY1TV8/+Q==: 00:25:26.744 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: 00:25:26.744 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:26.744 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:26.744 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZjODdkNzM2ODUzMTA5MTZmMjFiMzQzNGU4ZGRmZmFiZDFmZGExZWQ2NWQ3MjY1TV8/+Q==: 00:25:26.744 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: ]] 00:25:26.744 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: 00:25:26.744 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:26.744 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.744 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:26.744 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:26.744 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:26.744 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.744 20:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:26.744 20:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.744 20:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.744 20:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.744 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.744 20:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:26.744 20:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:26.744 20:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:26.744 20:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.744 20:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.744 20:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:26.744 20:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.744 20:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:26.744 20:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:26.744 20:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:26.744 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:26.744 20:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.744 20:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.045 nvme0n1 00:25:27.045 20:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.045 20:51:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.045 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.045 20:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.045 20:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.045 20:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.045 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.045 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.045 20:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.045 20:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.045 20:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.045 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.045 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:27.045 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.045 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:27.045 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:27.045 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:27.045 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTY2MjQ3MDM5MDhkYjA3ZTBkOGRkOTJjMDM5MmI1MjhPqa7/: 00:25:27.045 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: 00:25:27.045 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:27.045 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:27.046 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTY2MjQ3MDM5MDhkYjA3ZTBkOGRkOTJjMDM5MmI1MjhPqa7/: 00:25:27.046 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: ]] 00:25:27.046 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: 00:25:27.046 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:27.046 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.046 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:27.046 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:27.046 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:27.046 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.046 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:27.046 20:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.046 20:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.046 20:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.046 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.046 20:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:25:27.046 20:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:27.046 20:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:27.046 20:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.046 20:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.046 20:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:27.046 20:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.046 20:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:27.046 20:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:27.046 20:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:27.046 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:27.046 20:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.046 20:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.614 nvme0n1 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzBhYjhkNDA1YTVkMDE0MDQwODRmOGYxMDdhNDQ5N2RiODhmMTYxOTU0OGJmZmI2bxssNA==: 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NzBhYjhkNDA1YTVkMDE0MDQwODRmOGYxMDdhNDQ5N2RiODhmMTYxOTU0OGJmZmI2bxssNA==: 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: ]] 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.614 20:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.615 20:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:27.615 20:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.615 20:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:27.615 20:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:27.615 20:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:27.615 20:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:27.615 20:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.615 20:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.874 nvme0n1 00:25:27.874 20:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.874 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.874 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.874 20:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.874 20:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.874 20:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.874 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:25:27.874 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.874 20:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.874 20:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.134 20:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.134 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.134 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:28.134 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.134 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:28.134 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:28.134 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:28.134 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI3ODIwY2U3OTEyOWE3Zjc2YTRkMzI2OWU0MGY5MmZkYzE3MTIxZTYyNDk0MWI2NDQxMjQ0MTczNzc2YTcxNg+vwPc=: 00:25:28.134 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:28.134 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:28.134 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:28.134 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI3ODIwY2U3OTEyOWE3Zjc2YTRkMzI2OWU0MGY5MmZkYzE3MTIxZTYyNDk0MWI2NDQxMjQ0MTczNzc2YTcxNg+vwPc=: 00:25:28.134 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:28.134 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:28.134 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.134 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:28.134 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:28.134 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:28.134 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.134 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:28.134 20:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.134 20:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.134 20:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.134 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.134 20:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:28.134 20:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:28.134 20:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:28.134 20:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.134 20:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.134 20:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:28.134 20:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.134 20:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
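
Each keyid iteration traced above is one full DH-HMAC-CHAP round trip: nvmet_auth_set_key programs the kernel target's expectations for the host, then connect_authenticate drives the SPDK initiator over the RPC socket. Below is a minimal standalone sketch of the keyid-3 iteration, assuming an SPDK checkout's ./scripts/rpc.py, the usual nvmet configfs layout for authenticated hosts, and that the DHHC-1 secrets were registered in the keyring under the names key3/ckey3 earlier in auth.sh; the secret strings are illustrative placeholders, not the values from this run.

# Target side: what the echoes inside nvmet_auth_set_key end up writing
# (configfs paths assume the host entry for host0 already exists).
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha384)' > "$host/dhchap_hash"
echo 'ffdhe6144' > "$host/dhchap_dhgroup"
echo 'DHHC-1:02:<host secret>' > "$host/dhchap_key"
echo 'DHHC-1:00:<controller secret>' > "$host/dhchap_ctrl_key"  # only when ckey is non-empty

# Host side: restrict the initiator to the same digest/dhgroup, connect with
# the matching key pair, verify the controller came up, and tear it down.
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3
./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'  # expect: nvme0
./scripts/rpc.py bdev_nvme_detach_controller nvme0
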
00:25:28.134 20:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:28.134 20:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:28.134 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:28.134 20:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.134 20:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.393 nvme0n1 00:25:28.393 20:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.393 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.393 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.393 20:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.393 20:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.393 20:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.393 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.393 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.393 20:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.393 20:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.393 20:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.393 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:28.393 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.393 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:28.393 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.393 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:28.393 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:28.393 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:28.393 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjZmZjAzOGNmZjRkNDRhNGE4NGE0YzJiNjRkZGUzZmKK5ZBx: 00:25:28.393 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: 00:25:28.393 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:28.393 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:28.394 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjZmZjAzOGNmZjRkNDRhNGE4NGE0YzJiNjRkZGUzZmKK5ZBx: 00:25:28.394 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: ]] 00:25:28.394 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: 00:25:28.394 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:28.394 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
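
The ip_candidates block that repeats before every attach is nvmf/common.sh's get_main_ns_ip resolving which address the host should dial for the active transport. Condensed from the trace (a sketch reconstructed from the traced statements, not the verbatim helper):

get_main_ns_ip() {
    local ip
    local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    # Bail out if the transport is unset or has no candidate variable mapped.
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}  # name of the env var, e.g. NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1           # indirect expansion; in this run it yields 10.0.0.1
    echo "${!ip}"
}
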
00:25:28.394 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:28.394 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:28.394 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:28.394 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.394 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:28.394 20:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.394 20:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.394 20:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.394 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.394 20:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:28.394 20:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:28.394 20:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:28.394 20:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.394 20:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.394 20:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:28.394 20:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.394 20:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:28.394 20:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:28.394 20:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:28.394 20:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:28.394 20:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.394 20:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.964 nvme0n1 00:25:28.964 20:51:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.964 20:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.964 20:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.964 20:51:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.964 20:51:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.964 20:51:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZjODdkNzM2ODUzMTA5MTZmMjFiMzQzNGU4ZGRmZmFiZDFmZGExZWQ2NWQ3MjY1TV8/+Q==: 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZjODdkNzM2ODUzMTA5MTZmMjFiMzQzNGU4ZGRmZmFiZDFmZGExZWQ2NWQ3MjY1TV8/+Q==: 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: ]] 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.224 20:51:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.792 nvme0n1 00:25:29.792 20:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.792 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.792 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.792 20:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.792 20:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.792 20:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.792 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.792 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.792 20:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.792 20:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.792 20:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.792 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.792 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:29.792 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.792 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:29.792 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:29.792 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:29.792 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTY2MjQ3MDM5MDhkYjA3ZTBkOGRkOTJjMDM5MmI1MjhPqa7/: 00:25:29.792 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: 00:25:29.792 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:29.792 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:29.792 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTY2MjQ3MDM5MDhkYjA3ZTBkOGRkOTJjMDM5MmI1MjhPqa7/: 00:25:29.792 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: ]] 00:25:29.792 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: 00:25:29.792 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:29.792 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.792 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:29.792 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:29.793 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:29.793 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.793 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:25:29.793 20:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.793 20:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.793 20:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.793 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.793 20:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:29.793 20:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:29.793 20:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:29.793 20:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.793 20:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.793 20:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:29.793 20:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.793 20:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:29.793 20:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:29.793 20:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:29.793 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:29.793 20:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.793 20:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.361 nvme0n1 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NzBhYjhkNDA1YTVkMDE0MDQwODRmOGYxMDdhNDQ5N2RiODhmMTYxOTU0OGJmZmI2bxssNA==: 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzBhYjhkNDA1YTVkMDE0MDQwODRmOGYxMDdhNDQ5N2RiODhmMTYxOTU0OGJmZmI2bxssNA==: 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: ]] 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.361 20:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.928 nvme0n1 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI3ODIwY2U3OTEyOWE3Zjc2YTRkMzI2OWU0MGY5MmZkYzE3MTIxZTYyNDk0MWI2NDQxMjQ0MTczNzc2YTcxNg+vwPc=: 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI3ODIwY2U3OTEyOWE3Zjc2YTRkMzI2OWU0MGY5MmZkYzE3MTIxZTYyNDk0MWI2NDQxMjQ0MTczNzc2YTcxNg+vwPc=: 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.188 20:51:05 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.188 20:51:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.756 nvme0n1 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjZmZjAzOGNmZjRkNDRhNGE4NGE0YzJiNjRkZGUzZmKK5ZBx: 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjZmZjAzOGNmZjRkNDRhNGE4NGE0YzJiNjRkZGUzZmKK5ZBx: 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: ]] 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.756 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.016 nvme0n1 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.016 20:51:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZjODdkNzM2ODUzMTA5MTZmMjFiMzQzNGU4ZGRmZmFiZDFmZGExZWQ2NWQ3MjY1TV8/+Q==: 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZjODdkNzM2ODUzMTA5MTZmMjFiMzQzNGU4ZGRmZmFiZDFmZGExZWQ2NWQ3MjY1TV8/+Q==: 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: ]] 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.016 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.276 nvme0n1 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTY2MjQ3MDM5MDhkYjA3ZTBkOGRkOTJjMDM5MmI1MjhPqa7/: 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTY2MjQ3MDM5MDhkYjA3ZTBkOGRkOTJjMDM5MmI1MjhPqa7/: 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: ]] 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.276 nvme0n1 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.276 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.535 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.535 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.535 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.535 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.535 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.535 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.535 20:51:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.535 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:32.535 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.535 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:32.535 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:32.535 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:32.535 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzBhYjhkNDA1YTVkMDE0MDQwODRmOGYxMDdhNDQ5N2RiODhmMTYxOTU0OGJmZmI2bxssNA==: 00:25:32.535 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: 00:25:32.535 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:32.535 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:32.535 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzBhYjhkNDA1YTVkMDE0MDQwODRmOGYxMDdhNDQ5N2RiODhmMTYxOTU0OGJmZmI2bxssNA==: 00:25:32.535 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: ]] 00:25:32.535 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: 00:25:32.535 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:32.535 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.535 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:32.536 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:32.536 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:32.536 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.536 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:32.536 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.536 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.536 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.536 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.536 20:51:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.536 20:51:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.536 20:51:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.536 20:51:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.536 20:51:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.536 20:51:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:32.536 20:51:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.536 20:51:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:32.536 20:51:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:32.536 20:51:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:32.536 20:51:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:32.536 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.536 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.536 nvme0n1 00:25:32.536 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.536 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.536 20:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.536 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.536 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.536 20:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.536 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.536 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.536 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.536 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.794 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.794 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.794 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:32.794 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.794 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:32.794 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:32.794 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:32.794 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI3ODIwY2U3OTEyOWE3Zjc2YTRkMzI2OWU0MGY5MmZkYzE3MTIxZTYyNDk0MWI2NDQxMjQ0MTczNzc2YTcxNg+vwPc=: 00:25:32.794 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:32.794 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:32.794 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:32.794 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI3ODIwY2U3OTEyOWE3Zjc2YTRkMzI2OWU0MGY5MmZkYzE3MTIxZTYyNDk0MWI2NDQxMjQ0MTczNzc2YTcxNg+vwPc=: 00:25:32.794 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:32.794 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.795 nvme0n1 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjZmZjAzOGNmZjRkNDRhNGE4NGE0YzJiNjRkZGUzZmKK5ZBx: 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjZmZjAzOGNmZjRkNDRhNGE4NGE0YzJiNjRkZGUzZmKK5ZBx: 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: ]] 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.795 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.053 nvme0n1 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.053 
20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZjODdkNzM2ODUzMTA5MTZmMjFiMzQzNGU4ZGRmZmFiZDFmZGExZWQ2NWQ3MjY1TV8/+Q==: 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZjODdkNzM2ODUzMTA5MTZmMjFiMzQzNGU4ZGRmZmFiZDFmZGExZWQ2NWQ3MjY1TV8/+Q==: 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: ]] 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.053 20:51:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.053 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.311 nvme0n1 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTY2MjQ3MDM5MDhkYjA3ZTBkOGRkOTJjMDM5MmI1MjhPqa7/: 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
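[editor's note] The nvmet_auth_set_key calls traced at auth.sh@42-51 echo the digest wrapper, dhgroup, and DHHC-1 key strings, but xtrace does not show where those echoes are redirected. A minimal sketch of what the helper plausibly does, assuming the target side is Linux nvmet and that the writes land in the host entry's configfs auth attributes (the path and attribute names below are assumptions, not taken from this log):

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        # Assumed configfs location for the host NQN used throughout this run.
        local hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac(${digest})" > "${hostdir}/dhchap_hash"     # cf. auth.sh@48
        echo "${dhgroup}"      > "${hostdir}/dhchap_dhgroup"  # cf. auth.sh@49
        echo "${key}"          > "${hostdir}/dhchap_key"      # cf. auth.sh@50
        # The controller (bidirectional) key is optional; keyid 4 in this log
        # has an empty ckey, so the @51 write is skipped there.
        [[ -z ${ckey} ]] || echo "${ckey}" > "${hostdir}/dhchap_ctrl_key"
    }
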
00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTY2MjQ3MDM5MDhkYjA3ZTBkOGRkOTJjMDM5MmI1MjhPqa7/: 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: ]] 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.311 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.570 nvme0n1 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.570 20:51:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzBhYjhkNDA1YTVkMDE0MDQwODRmOGYxMDdhNDQ5N2RiODhmMTYxOTU0OGJmZmI2bxssNA==: 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzBhYjhkNDA1YTVkMDE0MDQwODRmOGYxMDdhNDQ5N2RiODhmMTYxOTU0OGJmZmI2bxssNA==: 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: ]] 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
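[editor's note] The get_main_ns_ip trace that runs through each of these blocks (nvmf/common.sh@741-755) reduces to a small lookup: the transport name selects an environment variable, whose value (10.0.0.1 in this run) becomes the -a address for bdev_nvme_attach_controller. Reconstructed directly from the trace; only the TEST_TRANSPORT variable name is an assumption, since the trace shows the already-expanded string tcp:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP
            [tcp]=NVMF_INITIATOR_IP
        )
        [[ -z $TEST_TRANSPORT ]] && return 1                  # @747: [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                  # @748: ip=NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                           # @750: indirect expansion
        echo "${!ip}"                                         # @755: 10.0.0.1
    }
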
00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.570 20:51:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.828 nvme0n1 00:25:33.828 20:51:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI3ODIwY2U3OTEyOWE3Zjc2YTRkMzI2OWU0MGY5MmZkYzE3MTIxZTYyNDk0MWI2NDQxMjQ0MTczNzc2YTcxNg+vwPc=: 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI3ODIwY2U3OTEyOWE3Zjc2YTRkMzI2OWU0MGY5MmZkYzE3MTIxZTYyNDk0MWI2NDQxMjQ0MTczNzc2YTcxNg+vwPc=: 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:33.829 
20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.829 20:51:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.087 nvme0n1 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjZmZjAzOGNmZjRkNDRhNGE4NGE0YzJiNjRkZGUzZmKK5ZBx: 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjZmZjAzOGNmZjRkNDRhNGE4NGE0YzJiNjRkZGUzZmKK5ZBx: 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: ]] 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:34.087 20:51:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.088 20:51:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:34.088 20:51:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:34.088 20:51:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:34.088 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:34.088 20:51:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.088 20:51:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.346 nvme0n1 00:25:34.346 20:51:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.346 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.346 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.346 20:51:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.346 20:51:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.346 20:51:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.346 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.346 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.346 20:51:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.346 20:51:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.346 20:51:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.346 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.346 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:34.346 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.346 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:34.346 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:34.346 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:34.346 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZjODdkNzM2ODUzMTA5MTZmMjFiMzQzNGU4ZGRmZmFiZDFmZGExZWQ2NWQ3MjY1TV8/+Q==: 00:25:34.346 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: 00:25:34.346 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:34.346 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:34.346 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZjODdkNzM2ODUzMTA5MTZmMjFiMzQzNGU4ZGRmZmFiZDFmZGExZWQ2NWQ3MjY1TV8/+Q==: 00:25:34.346 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: ]] 00:25:34.346 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: 00:25:34.346 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:34.346 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.346 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:34.346 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:34.346 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:34.346 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.346 20:51:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:34.346 20:51:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.346 20:51:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.346 20:51:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.645 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.645 20:51:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:34.645 20:51:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:34.645 20:51:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:34.645 20:51:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.645 20:51:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.645 20:51:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:34.645 20:51:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.645 20:51:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:34.645 20:51:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:34.645 20:51:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:34.645 20:51:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:34.645 20:51:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.645 20:51:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.645 nvme0n1 00:25:34.645 20:51:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.645 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.645 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.645 20:51:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.645 20:51:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.645 20:51:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.645 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.645 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.645 20:51:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.645 20:51:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.903 20:51:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.903 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.903 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:34.903 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.903 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:34.903 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:34.903 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
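[editor's note] The initiator side of each iteration is the two RPCs traced at auth.sh@60-61: restrict the allowed DH-HMAC-CHAP digest/dhgroup, then attach with the matching key pair. rpc_cmd is the test suite's wrapper; invoking scripts/rpc.py directly as below is an assumed manual equivalent, with all flags copied verbatim from the ffdhe4096/keyid-1 iteration above. key1/ckey1 are assumed to have been registered earlier in the script (not shown in this excerpt):

    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
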
00:25:34.903 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTY2MjQ3MDM5MDhkYjA3ZTBkOGRkOTJjMDM5MmI1MjhPqa7/: 00:25:34.903 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: 00:25:34.903 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:34.903 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:34.903 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTY2MjQ3MDM5MDhkYjA3ZTBkOGRkOTJjMDM5MmI1MjhPqa7/: 00:25:34.903 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: ]] 00:25:34.903 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: 00:25:34.903 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:34.903 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.903 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:34.903 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:34.903 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:34.903 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.903 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:34.903 20:51:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.903 20:51:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.903 20:51:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.903 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.903 20:51:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:34.903 20:51:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:34.903 20:51:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:34.903 20:51:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.903 20:51:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.903 20:51:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:34.903 20:51:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.903 20:51:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:34.903 20:51:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:34.903 20:51:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:34.903 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:34.903 20:51:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.903 20:51:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.161 nvme0n1 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzBhYjhkNDA1YTVkMDE0MDQwODRmOGYxMDdhNDQ5N2RiODhmMTYxOTU0OGJmZmI2bxssNA==: 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzBhYjhkNDA1YTVkMDE0MDQwODRmOGYxMDdhNDQ5N2RiODhmMTYxOTU0OGJmZmI2bxssNA==: 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: ]] 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.161 20:51:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.418 nvme0n1 00:25:35.418 20:51:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.418 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.418 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.418 20:51:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.418 20:51:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.418 20:51:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.418 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.418 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.418 20:51:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.418 20:51:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.418 20:51:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.418 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.418 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:35.418 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.419 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:35.419 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:35.419 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:35.419 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI3ODIwY2U3OTEyOWE3Zjc2YTRkMzI2OWU0MGY5MmZkYzE3MTIxZTYyNDk0MWI2NDQxMjQ0MTczNzc2YTcxNg+vwPc=: 00:25:35.419 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:35.419 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:35.419 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:35.419 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZWI3ODIwY2U3OTEyOWE3Zjc2YTRkMzI2OWU0MGY5MmZkYzE3MTIxZTYyNDk0MWI2NDQxMjQ0MTczNzc2YTcxNg+vwPc=: 00:25:35.419 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:35.419 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:35.419 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.419 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:35.419 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:35.419 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:35.419 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.419 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:35.419 20:51:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.419 20:51:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.419 20:51:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.419 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.419 20:51:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:35.419 20:51:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:35.419 20:51:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:35.419 20:51:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.419 20:51:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.419 20:51:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:35.419 20:51:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.419 20:51:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:35.419 20:51:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:35.419 20:51:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:35.419 20:51:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:35.419 20:51:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.419 20:51:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.676 nvme0n1 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjZmZjAzOGNmZjRkNDRhNGE4NGE0YzJiNjRkZGUzZmKK5ZBx: 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjZmZjAzOGNmZjRkNDRhNGE4NGE0YzJiNjRkZGUzZmKK5ZBx: 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: ]] 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
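[editor's note] The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion at auth.sh@58 is why keyid 4 attaches without a controller key: the :+ form yields the option pair only when the array entry is non-empty, otherwise the array stays empty and no --dhchap-ctrlr-key argument is passed. A standalone illustration of the idiom (the key values are placeholders, not the ones in this log):

    ckeys=([0]="DHHC-1:03:example" [4]="")
    for keyid in 0 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${#ckey[@]} extra args: ${ckey[*]}"
    done
    # keyid=0 -> 2 extra args: --dhchap-ctrlr-key ckey0
    # keyid=4 -> 0 extra args:
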
00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.676 20:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.242 nvme0n1 00:25:36.242 20:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.242 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.242 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.242 20:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.242 20:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.242 20:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.242 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.242 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.242 20:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.242 20:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.242 20:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.242 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.242 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:36.242 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.242 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:36.242 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:36.242 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:36.243 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZjODdkNzM2ODUzMTA5MTZmMjFiMzQzNGU4ZGRmZmFiZDFmZGExZWQ2NWQ3MjY1TV8/+Q==: 00:25:36.243 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: 00:25:36.243 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:36.243 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:36.243 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZjODdkNzM2ODUzMTA5MTZmMjFiMzQzNGU4ZGRmZmFiZDFmZGExZWQ2NWQ3MjY1TV8/+Q==: 00:25:36.243 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: ]] 00:25:36.243 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: 00:25:36.243 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
00:25:36.243 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.243 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:36.243 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:36.243 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:36.243 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.243 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:36.243 20:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.243 20:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.243 20:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.243 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.243 20:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:36.243 20:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:36.243 20:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:36.243 20:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.243 20:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.243 20:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:36.243 20:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.243 20:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:36.243 20:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:36.243 20:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:36.243 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:36.243 20:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.243 20:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.502 nvme0n1 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTY2MjQ3MDM5MDhkYjA3ZTBkOGRkOTJjMDM5MmI1MjhPqa7/: 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTY2MjQ3MDM5MDhkYjA3ZTBkOGRkOTJjMDM5MmI1MjhPqa7/: 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: ]] 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.502 20:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.067 nvme0n1 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzBhYjhkNDA1YTVkMDE0MDQwODRmOGYxMDdhNDQ5N2RiODhmMTYxOTU0OGJmZmI2bxssNA==: 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzBhYjhkNDA1YTVkMDE0MDQwODRmOGYxMDdhNDQ5N2RiODhmMTYxOTU0OGJmZmI2bxssNA==: 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: ]] 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.067 20:51:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.326 nvme0n1 00:25:37.326 20:51:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.326 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.326 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.326 20:51:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.326 20:51:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.326 20:51:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZWI3ODIwY2U3OTEyOWE3Zjc2YTRkMzI2OWU0MGY5MmZkYzE3MTIxZTYyNDk0MWI2NDQxMjQ0MTczNzc2YTcxNg+vwPc=: 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI3ODIwY2U3OTEyOWE3Zjc2YTRkMzI2OWU0MGY5MmZkYzE3MTIxZTYyNDk0MWI2NDQxMjQ0MTczNzc2YTcxNg+vwPc=: 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.585 20:51:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.844 nvme0n1 00:25:37.844 20:51:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.844 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.844 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.844 20:51:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.844 20:51:12 
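The empty ckey just above is deliberate: key 4 has no controller secret, and auth.sh@58 builds the argument list with bash's ${parameter:+word} alternate-value expansion, so the attach for keyid 4 omits --dhchap-ctrlr-key entirely rather than passing an empty value. The idiom, as it appears in the trace (the "..." stands for the transport arguments shown elsewhere in the log):

  # expands to zero words when ckeys[keyid] is unset or empty (keyid=4 here)
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  rpc_cmd bdev_nvme_attach_controller ... --dhchap-key "key${keyid}" "${ckey[@]}"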
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.844 20:51:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.844 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.844 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.844 20:51:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.844 20:51:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.844 20:51:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.844 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:37.844 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.844 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:37.844 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.844 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:37.844 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:37.844 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:37.844 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjZmZjAzOGNmZjRkNDRhNGE4NGE0YzJiNjRkZGUzZmKK5ZBx: 00:25:37.844 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: 00:25:37.844 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:37.844 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:37.844 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjZmZjAzOGNmZjRkNDRhNGE4NGE0YzJiNjRkZGUzZmKK5ZBx: 00:25:37.844 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: ]] 00:25:37.844 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWE0ZTJmMDQ5NWM5YWFkZmQzNzcyNmI0NmY0NWM1ZjI2ODI4YjE3ZDViMmU5Njg5M2ViYzFiMzkwODVkNjIyOM1vDHo=: 00:25:37.844 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:37.844 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.844 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:37.844 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:37.844 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:37.845 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.845 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:37.845 20:51:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.845 20:51:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.845 20:51:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.845 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.845 20:51:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:37.845 20:51:12 nvmf_tcp.nvmf_auth_host -- 
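get_main_ns_ip (nvmf/common.sh@741-755), traced before every attach, resolves the connect address by transport: an associative array maps each transport name to the *name* of an environment variable, and indirect expansion dereferences it. Reconstructed from the xtrace; the variable holding "tcp" is not visible in the trace, so TEST_TRANSPORT below is an assumption (it is the usual harness variable):

  declare -A ip_candidates=( [rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP )
  ip=${ip_candidates[$TEST_TRANSPORT]}   # tcp -> NVMF_INITIATOR_IP
  echo "${!ip}"                          # bash indirect expansion -> 10.0.0.1 on this rig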
nvmf/common.sh@742 -- # ip_candidates=() 00:25:37.845 20:51:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:37.845 20:51:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.845 20:51:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.845 20:51:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:37.845 20:51:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.845 20:51:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:37.845 20:51:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:37.845 20:51:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:37.845 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:37.845 20:51:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.845 20:51:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.412 nvme0n1 00:25:38.412 20:51:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.412 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.412 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.412 20:51:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.412 20:51:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.412 20:51:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.412 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.412 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.412 20:51:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.412 20:51:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.670 20:51:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.670 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.670 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:38.670 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.670 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:38.670 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:38.670 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:38.670 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZjODdkNzM2ODUzMTA5MTZmMjFiMzQzNGU4ZGRmZmFiZDFmZGExZWQ2NWQ3MjY1TV8/+Q==: 00:25:38.670 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: 00:25:38.670 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:38.671 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:38.671 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OGZjODdkNzM2ODUzMTA5MTZmMjFiMzQzNGU4ZGRmZmFiZDFmZGExZWQ2NWQ3MjY1TV8/+Q==: 00:25:38.671 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: ]] 00:25:38.671 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: 00:25:38.671 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:38.671 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.671 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:38.671 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:38.671 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:38.671 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.671 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:38.671 20:51:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.671 20:51:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.671 20:51:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.671 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.671 20:51:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:38.671 20:51:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:38.671 20:51:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:38.671 20:51:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.671 20:51:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.671 20:51:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:38.671 20:51:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.671 20:51:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:38.671 20:51:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:38.671 20:51:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:38.671 20:51:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:38.671 20:51:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.671 20:51:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.241 nvme0n1 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.241 20:51:13 
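Target-side counterpart: nvmet_auth_set_key (auth.sh@42-51) programs the kernel nvmet host entry, and the bare echo traces above are those writes. The destination paths are not shown in the xtrace, so this is a hedged sketch assuming the standard nvmet configfs attribute names:

  H=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha512)'  > "$H/dhchap_hash"      # digest (auth.sh@48)
  echo ffdhe8192       > "$H/dhchap_dhgroup"   # DH group (auth.sh@49)
  echo "DHHC-1:00:..." > "$H/dhchap_key"       # host secret (auth.sh@50)
  echo "DHHC-1:02:..." > "$H/dhchap_ctrl_key"  # controller secret, only when one is set (auth.sh@51)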
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTY2MjQ3MDM5MDhkYjA3ZTBkOGRkOTJjMDM5MmI1MjhPqa7/: 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTY2MjQ3MDM5MDhkYjA3ZTBkOGRkOTJjMDM5MmI1MjhPqa7/: 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: ]] 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDUyZmM1NmM4M2UzM2I1OTkwOGE0YzRiZTAwMjUwMDKqbf/D: 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- 
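A recurring oddity worth decoding: the [[ nvme0 == \n\v\m\e\0 ]] checks are not log corruption. Inside [[ ]] the right-hand side of == is a glob pattern, so the script escapes every character to force a literal match, and xtrace prints the escapes. The unescaped equivalent of the traced check is simply:

  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]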
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.241 20:51:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.810 nvme0n1 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzBhYjhkNDA1YTVkMDE0MDQwODRmOGYxMDdhNDQ5N2RiODhmMTYxOTU0OGJmZmI2bxssNA==: 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzBhYjhkNDA1YTVkMDE0MDQwODRmOGYxMDdhNDQ5N2RiODhmMTYxOTU0OGJmZmI2bxssNA==: 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: ]] 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTdlNzRiZjQ4ODdkODA2ZjdiNDZmNThiOWNmZWQxZGZvs7IH: 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:39.810 20:51:14 
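The DHHC-1 strings being echoed follow the NVMe-oF secret representation, DHHC-1:<hh>:<base64>:, where <hh> names the hash used to transform the secret (00 = untransformed, 01/02/03 = SHA-256/384/512) and the base64 payload is the secret with a CRC-32 appended. A quick offline sanity check of that framing (lengths here match the 02 key set a few lines up: 48-byte secret plus 4-byte CRC):

  b64=${key#DHHC-1:*:}; b64=${b64%:}       # strip the DHHC-1:<hh>: prefix and trailing colon
  printf '%s' "$b64" | base64 -d | wc -c   # 52 for the ffdhe8192 key-3 secret above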
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.810 20:51:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.378 nvme0n1 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI3ODIwY2U3OTEyOWE3Zjc2YTRkMzI2OWU0MGY5MmZkYzE3MTIxZTYyNDk0MWI2NDQxMjQ0MTczNzc2YTcxNg+vwPc=: 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI3ODIwY2U3OTEyOWE3Zjc2YTRkMzI2OWU0MGY5MmZkYzE3MTIxZTYyNDk0MWI2NDQxMjQ0MTczNzc2YTcxNg+vwPc=: 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:40.378 20:51:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.944 nvme0n1 00:25:40.944 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.944 20:51:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.944 20:51:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.944 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.944 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.268 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.268 20:51:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.268 20:51:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.268 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.268 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.268 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.268 20:51:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:41.268 20:51:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.268 20:51:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:41.268 20:51:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:41.268 20:51:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:41.268 20:51:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZjODdkNzM2ODUzMTA5MTZmMjFiMzQzNGU4ZGRmZmFiZDFmZGExZWQ2NWQ3MjY1TV8/+Q==: 00:25:41.268 20:51:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: 00:25:41.268 20:51:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:41.268 20:51:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:41.268 20:51:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZjODdkNzM2ODUzMTA5MTZmMjFiMzQzNGU4ZGRmZmFiZDFmZGExZWQ2NWQ3MjY1TV8/+Q==: 00:25:41.268 20:51:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: ]] 00:25:41.268 20:51:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJiNmY3YWRlYjU4NjI5Njk0N2JiZmY5NDA0ODk1NTcyOWQ2NTdlOGU4ZWU3YTJm3HUn3w==: 00:25:41.268 20:51:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:41.268 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.268 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.268 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.268 20:51:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:41.268 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.268 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.268 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.268 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.268 
20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.268 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:41.268 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.268 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:41.268 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:41.268 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:41.268 20:51:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:41.268 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:41.268 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:41.268 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.269 request: 00:25:41.269 { 00:25:41.269 "name": "nvme0", 00:25:41.269 "trtype": "tcp", 00:25:41.269 "traddr": "10.0.0.1", 00:25:41.269 "adrfam": "ipv4", 00:25:41.269 "trsvcid": "4420", 00:25:41.269 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:41.269 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:41.269 "prchk_reftag": false, 00:25:41.269 "prchk_guard": false, 00:25:41.269 "hdgst": false, 00:25:41.269 "ddgst": false, 00:25:41.269 "method": "bdev_nvme_attach_controller", 00:25:41.269 "req_id": 1 00:25:41.269 } 00:25:41.269 Got JSON-RPC error response 00:25:41.269 response: 00:25:41.269 { 00:25:41.269 "code": -5, 00:25:41.269 "message": "Input/output error" 00:25:41.269 } 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- 
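That request/response dump is rpc.py's failure output: the client echoes the JSON-RPC request it sent and the error it received. Code -5 is plausibly SPDK's -EIO surfaced verbatim through the JSON-RPC layer, and it is the expected outcome here: the host attached with no DH-HMAC-CHAP key while the target, re-keyed at auth.sh@110, now requires one. Outside the harness the same failure would look like:

  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 \
      -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
  # -> Got JSON-RPC error response: {"code": -5, "message": "Input/output error"}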
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.269 request: 00:25:41.269 { 00:25:41.269 "name": "nvme0", 00:25:41.269 "trtype": "tcp", 00:25:41.269 "traddr": "10.0.0.1", 00:25:41.269 "adrfam": "ipv4", 00:25:41.269 "trsvcid": "4420", 00:25:41.269 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:41.269 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:41.269 "prchk_reftag": false, 00:25:41.269 "prchk_guard": false, 00:25:41.269 "hdgst": false, 00:25:41.269 "ddgst": false, 00:25:41.269 "dhchap_key": "key2", 00:25:41.269 "method": "bdev_nvme_attach_controller", 00:25:41.269 "req_id": 1 00:25:41.269 } 00:25:41.269 Got JSON-RPC error response 00:25:41.269 response: 00:25:41.269 { 00:25:41.269 "code": -5, 00:25:41.269 "message": "Input/output error" 00:25:41.269 } 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:41.269 20:51:15 
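The negative tests pass precisely because they fail: each forbidden attach is wrapped in the NOT helper, which inverts the exit status, with the es bookkeeping traced at autotest_common.sh@648-675 recording the result. A simplified sketch of the pattern (the real helper also handles exit codes above 128, hence the (( es > 128 )) traces):

  NOT() { if "$@"; then return 1; else return 0; fi; }   # failure is the expected outcome
  NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2                          # wrong key: the target expects key 1
  rpc_cmd bdev_nvme_get_controllers | jq length  # 0: nothing stayed attached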
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.269 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.529 request: 00:25:41.529 { 00:25:41.529 "name": "nvme0", 00:25:41.529 "trtype": "tcp", 00:25:41.529 "traddr": "10.0.0.1", 00:25:41.529 "adrfam": "ipv4", 
00:25:41.529 "trsvcid": "4420", 00:25:41.529 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:41.529 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:41.529 "prchk_reftag": false, 00:25:41.529 "prchk_guard": false, 00:25:41.529 "hdgst": false, 00:25:41.529 "ddgst": false, 00:25:41.529 "dhchap_key": "key1", 00:25:41.529 "dhchap_ctrlr_key": "ckey2", 00:25:41.529 "method": "bdev_nvme_attach_controller", 00:25:41.529 "req_id": 1 00:25:41.529 } 00:25:41.529 Got JSON-RPC error response 00:25:41.529 response: 00:25:41.529 { 00:25:41.529 "code": -5, 00:25:41.529 "message": "Input/output error" 00:25:41.529 } 00:25:41.529 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:41.529 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:41.529 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:41.529 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:41.529 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:41.529 20:51:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:25:41.529 20:51:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:25:41.529 20:51:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:41.529 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:41.529 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:25:41.529 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:41.529 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:25:41.529 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:41.529 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:41.529 rmmod nvme_tcp 00:25:41.529 rmmod nvme_fabrics 00:25:41.529 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:41.529 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:25:41.529 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:25:41.529 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2817094 ']' 00:25:41.529 20:51:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2817094 00:25:41.530 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 2817094 ']' 00:25:41.530 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 2817094 00:25:41.530 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:25:41.530 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:41.530 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2817094 00:25:41.530 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:41.530 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:41.530 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2817094' 00:25:41.530 killing process with pid 2817094 00:25:41.530 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 2817094 00:25:41.530 20:51:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 2817094 00:25:41.787 20:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:25:41.787 20:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:41.787 20:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:41.787 20:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:41.787 20:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:41.787 20:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:41.787 20:51:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:41.787 20:51:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:43.691 20:51:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:43.691 20:51:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:43.691 20:51:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:43.691 20:51:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:43.691 20:51:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:43.691 20:51:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:25:43.691 20:51:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:43.691 20:51:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:43.691 20:51:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:43.691 20:51:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:43.691 20:51:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:43.691 20:51:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:43.691 20:51:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:46.223 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:46.223 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:46.223 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:46.223 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:46.223 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:46.223 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:46.223 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:46.223 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:46.223 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:46.223 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:46.223 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:46.223 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:46.223 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:46.223 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:46.223 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:46.223 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:47.158 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:47.158 20:51:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.6C9 /tmp/spdk.key-null.Hcr /tmp/spdk.key-sha256.Wzg /tmp/spdk.key-sha384.8A8 /tmp/spdk.key-sha512.f0h 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:47.158 20:51:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:49.687 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:25:49.687 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:49.687 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:25:49.687 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:25:49.687 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:25:49.687 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:25:49.687 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:25:49.687 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:25:49.687 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:25:49.687 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:25:49.687 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:25:49.687 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:25:49.687 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:25:49.687 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:25:49.687 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:25:49.687 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:25:49.687 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:25:49.945 00:25:49.945 real 0m48.909s 00:25:49.945 user 0m44.020s 00:25:49.945 sys 0m11.402s 00:25:49.945 20:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:49.945 20:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.945 ************************************ 00:25:49.945 END TEST nvmf_auth_host 00:25:49.945 ************************************ 00:25:49.945 20:51:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:49.945 20:51:24 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:25:49.945 20:51:24 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:49.945 20:51:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:49.945 20:51:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:49.945 20:51:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:49.945 ************************************ 00:25:49.945 START TEST nvmf_digest 00:25:49.945 ************************************ 00:25:49.945 20:51:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:49.945 * Looking for test storage... 
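Rewinding to the teardown just before the suite summary: clean_kernel_target (nvmf/common.sh@684-695) dismantles the kernel nvmet side, and the order is forced by configfs, which refuses to rmdir an object that is still linked. Condensed from the trace; the destination of the bare "echo 0" at common.sh@686 is not shown, though it plausibly disables the namespace before removal:

  # unlink first, then remove leaf-to-root, then drop the modules
  rm -f  /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
  rmdir  /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
  rmdir  /sys/kernel/config/nvmet/ports/1
  rmdir  /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  modprobe -r nvmet_tcp nvmet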
00:25:49.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:49.946 20:51:24 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:49.946 20:51:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:49.946 20:51:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:49.946 20:51:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:49.946 20:51:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:49.946 20:51:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:49.946 20:51:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:49.946 20:51:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:49.946 20:51:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:49.946 20:51:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:49.946 20:51:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:49.946 20:51:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:49.946 20:51:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:49.946 20:51:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:49.946 20:51:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:49.946 20:51:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:49.946 20:51:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:49.946 20:51:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:49.946 20:51:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:49.946 20:51:24 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:49.946 20:51:24 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:49.946 20:51:24 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:49.946 20:51:24 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.946 20:51:24 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.946 20:51:24 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.946 20:51:24 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:49.946 20:51:24 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.946 20:51:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:25:49.946 20:51:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:49.946 20:51:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:49.946 20:51:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:49.946 20:51:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:49.946 20:51:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:49.946 20:51:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:49.946 20:51:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:49.946 20:51:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:50.205 20:51:24 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:50.205 20:51:24 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:50.205 20:51:24 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:50.205 20:51:24 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:50.205 20:51:24 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:50.205 20:51:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:50.205 20:51:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:50.205 20:51:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:50.205 20:51:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:50.205 20:51:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:50.205 20:51:24 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.205 20:51:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:50.205 20:51:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:50.205 20:51:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:50.205 20:51:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:50.205 20:51:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:25:50.205 20:51:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:55.485 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:55.485 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:55.485 Found net devices under 0000:86:00.0: cvl_0_0 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:55.485 Found net devices under 0000:86:00.1: cvl_0_1 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:55.485 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:55.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:55.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:25:55.486 00:25:55.486 --- 10.0.0.2 ping statistics --- 00:25:55.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.486 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:55.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:55.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:25:55.486 00:25:55.486 --- 10.0.0.1 ping statistics --- 00:25:55.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.486 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:55.486 ************************************ 00:25:55.486 START TEST nvmf_digest_clean 00:25:55.486 ************************************ 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2830865 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2830865 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2830865 ']' 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:55.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:55.486 20:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:55.486 [2024-07-15 20:51:29.478218] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:25:55.486 [2024-07-15 20:51:29.478265] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:55.486 EAL: No free 2048 kB hugepages reported on node 1 00:25:55.486 [2024-07-15 20:51:29.534085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.486 [2024-07-15 20:51:29.612846] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:55.486 [2024-07-15 20:51:29.612882] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:55.486 [2024-07-15 20:51:29.612889] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:55.486 [2024-07-15 20:51:29.612895] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:55.486 [2024-07-15 20:51:29.612900] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
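For readers reconstructing the setup from the trace: nvmftestinit above splits the two back-to-back E810 ports between the root namespace (initiator side) and a dedicated target namespace, then proves the path with the two pings. A minimal sketch, using only the interface names and addresses this run logs (the helper logic itself lives in test/nvmf/common.sh):

ip netns add cvl_0_0_ns_spdk                  # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port out of the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                    # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> root namespace

Every target-side command from here on, including the nvmf_tgt start just logged, is wrapped in "ip netns exec cvl_0_0_ns_spdk".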
00:25:55.486 [2024-07-15 20:51:29.612917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.053 20:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:56.053 20:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:56.053 20:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:56.053 20:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:56.053 20:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:56.053 20:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:56.054 20:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:56.054 20:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:25:56.054 20:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:25:56.054 20:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.054 20:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:56.054 null0 00:25:56.054 [2024-07-15 20:51:30.395804] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:56.054 [2024-07-15 20:51:30.419969] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:56.054 20:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.054 20:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:25:56.054 20:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:56.054 20:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:56.054 20:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:56.054 20:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:56.054 20:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:56.054 20:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:56.054 20:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2830905 00:25:56.054 20:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2830905 /var/tmp/bperf.sock 00:25:56.054 20:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2830905 ']' 00:25:56.054 20:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:56.054 20:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:56.054 20:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:56.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
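Each clean-digest run that follows uses the same bperf control flow; a condensed sketch with the arguments this job logs (-z plus --wait-for-rpc keep bdevperf idle until the RPC calls arrive, which is why the trace drives everything through /var/tmp/bperf.sock):

build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init     # finish init before any I/O
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0                     # --ddgst enables TCP data digest
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Only the -w, -o and -q values change from run to run.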
00:25:56.054 20:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:56.054 20:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:56.054 20:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:56.054 [2024-07-15 20:51:30.467864] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:25:56.054 [2024-07-15 20:51:30.467908] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2830905 ] 00:25:56.054 EAL: No free 2048 kB hugepages reported on node 1 00:25:56.054 [2024-07-15 20:51:30.523516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.312 [2024-07-15 20:51:30.603350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:56.879 20:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:56.879 20:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:56.879 20:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:56.879 20:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:56.880 20:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:57.139 20:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:57.139 20:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:57.399 nvme0n1 00:25:57.399 20:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:57.399 20:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:57.399 Running I/O for 2 seconds... 
00:25:59.930 00:25:59.930 Latency(us) 00:25:59.930 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.930 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:59.930 nvme0n1 : 2.04 26680.37 104.22 0.00 0.00 4722.21 2208.28 44906.41 00:25:59.930 =================================================================================================================== 00:25:59.930 Total : 26680.37 104.22 0.00 0.00 4722.21 2208.28 44906.41 00:25:59.930 0 00:25:59.930 20:51:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:59.930 20:51:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:59.930 20:51:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:59.930 20:51:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:59.930 | select(.opcode=="crc32c") 00:25:59.930 | "\(.module_name) \(.executed)"' 00:25:59.930 20:51:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:59.930 20:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:59.930 20:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:59.930 20:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:59.930 20:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:59.930 20:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2830905 00:25:59.930 20:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2830905 ']' 00:25:59.930 20:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2830905 00:25:59.930 20:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:25:59.930 20:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:59.930 20:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2830905 00:25:59.930 20:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:59.930 20:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:59.930 20:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2830905' 00:25:59.930 killing process with pid 2830905 00:25:59.930 20:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2830905 00:25:59.930 Received shutdown signal, test time was about 2.000000 seconds 00:25:59.930 00:25:59.930 Latency(us) 00:25:59.930 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.930 =================================================================================================================== 00:25:59.930 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:59.930 20:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2830905 00:25:59.930 20:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:59.930 20:51:34 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:59.930 20:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:59.930 20:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:59.930 20:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:59.930 20:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:59.931 20:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:59.931 20:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2831600 00:25:59.931 20:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2831600 /var/tmp/bperf.sock 00:25:59.931 20:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:59.931 20:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2831600 ']' 00:25:59.931 20:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:59.931 20:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:59.931 20:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:59.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:59.931 20:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:59.931 20:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:59.931 [2024-07-15 20:51:34.376754] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:25:59.931 [2024-07-15 20:51:34.376802] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2831600 ] 00:25:59.931 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:59.931 Zero copy mechanism will not be used. 
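The pass/fail decision that precedes each killprocess block above reads the crc32c counters back over the same socket. A sketch of the digest.sh check as traced, with the jq filter copied verbatim from the log (the shell plumbing here is a paraphrase, not the literal source):

scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
  | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' \
  | { read -r acc_module acc_executed
      (( acc_executed > 0 ))            # some crc32c work must actually have run
      [[ $acc_module == software ]]; }  # with scan_dsa=false it must be the software module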
00:25:59.931 EAL: No free 2048 kB hugepages reported on node 1 00:26:00.188 [2024-07-15 20:51:34.430454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.188 [2024-07-15 20:51:34.498301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:00.753 20:51:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:00.753 20:51:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:00.753 20:51:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:00.753 20:51:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:00.753 20:51:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:01.012 20:51:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:01.012 20:51:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:01.270 nvme0n1 00:26:01.270 20:51:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:01.270 20:51:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:01.527 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:01.527 Zero copy mechanism will not be used. 00:26:01.527 Running I/O for 2 seconds... 
00:26:03.494 00:26:03.494 Latency(us) 00:26:03.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:03.494 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:03.494 nvme0n1 : 2.00 4842.29 605.29 0.00 0.00 3301.61 804.95 9915.88 00:26:03.494 =================================================================================================================== 00:26:03.494 Total : 4842.29 605.29 0.00 0.00 3301.61 804.95 9915.88 00:26:03.494 0 00:26:03.494 20:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:03.494 20:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:03.494 20:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:03.494 20:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:03.494 20:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:03.494 | select(.opcode=="crc32c") 00:26:03.494 | "\(.module_name) \(.executed)"' 00:26:03.494 20:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:03.494 20:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:03.494 20:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:03.494 20:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:03.494 20:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2831600 00:26:03.494 20:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2831600 ']' 00:26:03.494 20:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2831600 00:26:03.494 20:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:03.754 20:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:03.754 20:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2831600 00:26:03.754 20:51:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:03.754 20:51:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:03.754 20:51:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2831600' 00:26:03.754 killing process with pid 2831600 00:26:03.754 20:51:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2831600 00:26:03.754 Received shutdown signal, test time was about 2.000000 seconds 00:26:03.754 00:26:03.754 Latency(us) 00:26:03.754 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:03.754 =================================================================================================================== 00:26:03.754 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:03.754 20:51:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2831600 00:26:03.754 20:51:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:03.754 20:51:38 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:03.754 20:51:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:03.754 20:51:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:03.754 20:51:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:03.754 20:51:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:03.754 20:51:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:03.754 20:51:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2832296 00:26:03.754 20:51:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2832296 /var/tmp/bperf.sock 00:26:03.754 20:51:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2832296 ']' 00:26:03.754 20:51:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:03.754 20:51:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:03.754 20:51:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:03.754 20:51:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:03.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:03.754 20:51:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:03.754 20:51:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:03.754 [2024-07-15 20:51:38.229415] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:26:03.754 [2024-07-15 20:51:38.229463] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2832296 ] 00:26:04.013 EAL: No free 2048 kB hugepages reported on node 1 00:26:04.013 [2024-07-15 20:51:38.283821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.013 [2024-07-15 20:51:38.355690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:04.579 20:51:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:04.579 20:51:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:04.579 20:51:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:04.579 20:51:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:04.579 20:51:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:04.838 20:51:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:04.838 20:51:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:05.405 nvme0n1 00:26:05.405 20:51:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:05.405 20:51:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:05.405 Running I/O for 2 seconds... 
00:26:07.308 00:26:07.308 Latency(us) 00:26:07.308 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:07.308 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:07.308 nvme0n1 : 2.00 27394.03 107.01 0.00 0.00 4664.60 1816.49 7465.41 00:26:07.308 =================================================================================================================== 00:26:07.308 Total : 27394.03 107.01 0.00 0.00 4664.60 1816.49 7465.41 00:26:07.308 0 00:26:07.308 20:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:07.308 20:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:07.308 20:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:07.308 20:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:07.308 | select(.opcode=="crc32c") 00:26:07.308 | "\(.module_name) \(.executed)"' 00:26:07.308 20:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:07.566 20:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:07.566 20:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:07.566 20:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:07.566 20:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:07.566 20:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2832296 00:26:07.566 20:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2832296 ']' 00:26:07.566 20:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2832296 00:26:07.566 20:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:07.566 20:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:07.566 20:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2832296 00:26:07.566 20:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:07.566 20:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:07.566 20:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2832296' 00:26:07.566 killing process with pid 2832296 00:26:07.566 20:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2832296 00:26:07.566 Received shutdown signal, test time was about 2.000000 seconds 00:26:07.566 00:26:07.566 Latency(us) 00:26:07.566 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:07.566 =================================================================================================================== 00:26:07.566 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:07.566 20:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2832296 00:26:07.825 20:51:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:07.825 20:51:42 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:07.825 20:51:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:07.825 20:51:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:07.825 20:51:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:07.825 20:51:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:07.825 20:51:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:07.825 20:51:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2832863 00:26:07.825 20:51:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2832863 /var/tmp/bperf.sock 00:26:07.825 20:51:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2832863 ']' 00:26:07.825 20:51:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:07.825 20:51:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:07.825 20:51:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:07.825 20:51:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:07.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:07.825 20:51:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:07.825 20:51:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:07.825 [2024-07-15 20:51:42.138461] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:26:07.825 [2024-07-15 20:51:42.138510] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2832863 ] 00:26:07.825 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:07.825 Zero copy mechanism will not be used. 
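Taken together, the clean phase is a four-point matrix over I/O direction and size, always with DSA disabled. Expressed as a loop it would look like the sketch below (equivalent to the digest.sh@128-131 calls seen in the trace, not the literal source):

for spec in 'randread 4096 128' 'randread 131072 16' \
            'randwrite 4096 128' 'randwrite 131072 16'; do
    run_bperf $spec false    # false = scan_dsa off, so crc32c stays in software
done

The 131072-byte cases also exceed bdevperf's 65536-byte zero-copy threshold, which is what produces the repeated "Zero copy mechanism will not be used" notices.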
00:26:07.825 EAL: No free 2048 kB hugepages reported on node 1 00:26:07.825 [2024-07-15 20:51:42.192151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.825 [2024-07-15 20:51:42.270809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:08.760 20:51:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:08.760 20:51:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:08.760 20:51:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:08.760 20:51:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:08.760 20:51:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:08.760 20:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:08.760 20:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:09.326 nvme0n1 00:26:09.326 20:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:09.326 20:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:09.326 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:09.326 Zero copy mechanism will not be used. 00:26:09.326 Running I/O for 2 seconds... 
00:26:11.223 00:26:11.223 Latency(us) 00:26:11.223 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:11.223 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:11.223 nvme0n1 : 2.00 5487.23 685.90 0.00 0.00 2910.52 1795.12 8947.09 00:26:11.223 =================================================================================================================== 00:26:11.223 Total : 5487.23 685.90 0.00 0.00 2910.52 1795.12 8947.09 00:26:11.223 0 00:26:11.223 20:51:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:11.223 20:51:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:11.223 20:51:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:11.223 20:51:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:11.223 | select(.opcode=="crc32c") 00:26:11.223 | "\(.module_name) \(.executed)"' 00:26:11.223 20:51:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:11.480 20:51:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:11.480 20:51:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:11.480 20:51:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:11.480 20:51:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:11.480 20:51:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2832863 00:26:11.480 20:51:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2832863 ']' 00:26:11.480 20:51:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2832863 00:26:11.480 20:51:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:11.480 20:51:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:11.480 20:51:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2832863 00:26:11.480 20:51:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:11.481 20:51:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:11.481 20:51:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2832863' 00:26:11.481 killing process with pid 2832863 00:26:11.481 20:51:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2832863 00:26:11.481 Received shutdown signal, test time was about 2.000000 seconds 00:26:11.481 00:26:11.481 Latency(us) 00:26:11.481 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:11.481 =================================================================================================================== 00:26:11.481 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:11.481 20:51:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2832863 00:26:11.739 20:51:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2830865 00:26:11.739 20:51:46 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2830865 ']' 00:26:11.739 20:51:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2830865 00:26:11.739 20:51:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:11.739 20:51:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:11.739 20:51:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2830865 00:26:11.739 20:51:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:11.739 20:51:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:11.739 20:51:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2830865' 00:26:11.739 killing process with pid 2830865 00:26:11.739 20:51:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2830865 00:26:11.739 20:51:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2830865 00:26:11.998 00:26:11.998 real 0m16.824s 00:26:11.998 user 0m32.398s 00:26:11.998 sys 0m4.243s 00:26:11.998 20:51:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:11.998 20:51:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:11.998 ************************************ 00:26:11.998 END TEST nvmf_digest_clean 00:26:11.998 ************************************ 00:26:11.998 20:51:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:26:11.998 20:51:46 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:11.998 20:51:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:11.998 20:51:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:11.998 20:51:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:11.998 ************************************ 00:26:11.998 START TEST nvmf_digest_error 00:26:11.998 ************************************ 00:26:11.998 20:51:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:26:11.998 20:51:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:11.998 20:51:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:11.998 20:51:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:11.998 20:51:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:11.998 20:51:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2833546 00:26:11.998 20:51:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2833546 00:26:11.998 20:51:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2833546 ']' 00:26:11.998 20:51:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:11.998 20:51:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:11.998 20:51:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
00:26:11.998 20:51:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:11.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:11.998 20:51:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:11.998 20:51:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:11.998 20:51:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:26:11.998 [2024-07-15 20:51:46.360966] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization...
00:26:11.998 [2024-07-15 20:51:46.361010] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:11.998 EAL: No free 2048 kB hugepages reported on node 1
00:26:12.256 [2024-07-15 20:51:46.416013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:12.256 [2024-07-15 20:51:46.495277] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:12.256 [2024-07-15 20:51:46.495313] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:12.256 [2024-07-15 20:51:46.495320] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:12.256 [2024-07-15 20:51:46.495326] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:12.256 [2024-07-15 20:51:46.495331] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
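nvmfappstart --wait-for-rpc, traced above, boils down to: launch nvmf_tgt inside the test netns, record its pid, and poll the RPC socket until the target answers. A sketch under the paths and flags shown in this run; the polling loop body is an assumption about what waitforlisten does:

    tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # -i 0 picks the shared-memory id, -e 0xFFFF enables all tracepoint groups,
    # and --wait-for-rpc holds off framework init until an RPC tells it to go.
    ip netns exec cvl_0_0_ns_spdk "$tgt" -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!

    # waitforlisten: retry a cheap RPC until the UNIX domain socket is up.
    for ((i = 0; i < 100; i++)); do
        "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done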
00:26:12.256 [2024-07-15 20:51:46.495354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:26:12.821 20:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:12.821 20:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:26:12.821 20:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:26:12.821 20:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable
00:26:12.821 20:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:12.821 20:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:12.821 20:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:26:12.821 20:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:12.821 20:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:12.821 [2024-07-15 20:51:47.181491] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:26:12.821 20:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:12.821 20:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:26:12.821 20:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:26:12.821 20:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:12.821 20:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:12.821 null0
00:26:12.821 [2024-07-15 20:51:47.270796] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:12.821 [2024-07-15 20:51:47.294970] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:12.821 20:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:12.821 20:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:26:12.821 20:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:12.821 20:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:26:12.821 20:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:26:12.821 20:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:26:12.821 20:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2833748
00:26:12.821 20:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2833748 /var/tmp/bperf.sock
00:26:12.821 20:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2833748 ']'
00:26:12.821 20:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:12.821 20:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
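Before the workload starts, the target is configured entirely over RPC: crc32c is routed to the error-injection accel module, then common_target_config stands up a null bdev behind an NVMe/TCP subsystem. A sketch of that sequence; only the RPC names actually shown in the trace (accel_assign_opc, the 'null0' bdev, the TCP transport init and the 10.0.0.2:4420 listener) are taken from this run, while the null-bdev size arguments and the subsystem plumbing are assumptions:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    "$rpc" accel_assign_opc -o crc32c -m error     # route crc32c to the error module
    "$rpc" framework_start_init                    # leave --wait-for-rpc mode

    # Hypothetical common_target_config equivalent: a null bdev exported
    # over NVMe/TCP on the address the trace shows listening.
    "$rpc" bdev_null_create null0 1000 512         # size/block size assumed
    "$rpc" nvmf_create_transport -t tcp
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420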
00:26:12.821 20:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:12.821 20:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:12.821 20:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:12.821 20:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:13.079 [2024-07-15 20:51:47.329723] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization...
00:26:13.079 [2024-07-15 20:51:47.329763] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2833748 ]
00:26:13.079 EAL: No free 2048 kB hugepages reported on node 1
00:26:13.079 [2024-07-15 20:51:47.384043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:13.079 [2024-07-15 20:51:47.461387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:14.013 20:51:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:14.013 20:51:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:26:14.013 20:51:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:14.013 20:51:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:14.013 20:51:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:14.013 20:51:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.013 20:51:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:14.013 20:51:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.013 20:51:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:14.013 20:51:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:14.273 nvme0n1
00:26:14.273 20:51:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:26:14.273 20:51:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.273 20:51:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:14.273 20:51:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.273 20:51:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
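The host side of the error test is set up by the four RPCs traced above, all against bperf's /var/tmp/bperf.sock: infinite bdev retries with per-opcode NVMe error stats, the accel error injector cleared, a controller attached with data digest enabled, then corruption injected into the next 256 crc32c operations. Collected in one place (every command and flag here appears verbatim in the trace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    # Retry forever, so digest failures are retried instead of failing the job.
    "$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Start with injection disabled, attach with data digest (--ddgst) on...
    "$rpc" -s "$sock" accel_error_inject_error -o crc32c -t disable
    "$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # ...then corrupt the next 256 crc32c results, so each read below fails its
    # data digest check on the host side.
    "$rpc" -s "$sock" accel_error_inject_error -o crc32c -t corrupt -i 256

Each "data digest error" / READ / "TRANSIENT TRANSPORT ERROR (00/22)" triple in the stream that follows is one such corrupted read being reported and then retried under --bdev-retry-count -1.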
00:26:14.273 20:51:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:14.273 Running I/O for 2 seconds...
00:26:14.273 [2024-07-15 20:51:48.638507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20)
00:26:14.273 [2024-07-15 20:51:48.638541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.273 [2024-07-15 20:51:48.638551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.273 [2024-07-15 20:51:48.648376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20)
00:26:14.273 [2024-07-15 20:51:48.648400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.273 [2024-07-15 20:51:48.648409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.273 [2024-07-15 20:51:48.658655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20)
00:26:14.273 [2024-07-15 20:51:48.658676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.273 [2024-07-15 20:51:48.658685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.273 [2024-07-15 20:51:48.667336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20)
00:26:14.273 [2024-07-15 20:51:48.667356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.273 [2024-07-15 20:51:48.667365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.273 [2024-07-15 20:51:48.676670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20)
00:26:14.273 [2024-07-15 20:51:48.676690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.273 [2024-07-15 20:51:48.676698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.273 [2024-07-15 20:51:48.685641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20)
00:26:14.273 [2024-07-15 20:51:48.685660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.273 [2024-07-15 20:51:48.685668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.273 [2024-07-15 20:51:48.695710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20)
00:26:14.273 [2024-07-15 20:51:48.695730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23858 len:1 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.273 [2024-07-15 20:51:48.695738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.273 [2024-07-15 20:51:48.705720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.273 [2024-07-15 20:51:48.705740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.273 [2024-07-15 20:51:48.705749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.273 [2024-07-15 20:51:48.715071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.273 [2024-07-15 20:51:48.715091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.273 [2024-07-15 20:51:48.715099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.273 [2024-07-15 20:51:48.725498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.273 [2024-07-15 20:51:48.725516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:25056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.273 [2024-07-15 20:51:48.725525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.273 [2024-07-15 20:51:48.733378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.273 [2024-07-15 20:51:48.733397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.273 [2024-07-15 20:51:48.733406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.273 [2024-07-15 20:51:48.743585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.273 [2024-07-15 20:51:48.743604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.273 [2024-07-15 20:51:48.743612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.273 [2024-07-15 20:51:48.753545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.273 [2024-07-15 20:51:48.753565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.273 [2024-07-15 20:51:48.753573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.533 [2024-07-15 20:51:48.763242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.533 [2024-07-15 20:51:48.763263] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.533 [2024-07-15 20:51:48.763272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.533 [2024-07-15 20:51:48.771538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.533 [2024-07-15 20:51:48.771558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.533 [2024-07-15 20:51:48.771566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.533 [2024-07-15 20:51:48.781781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.533 [2024-07-15 20:51:48.781801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.533 [2024-07-15 20:51:48.781813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.533 [2024-07-15 20:51:48.790291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.533 [2024-07-15 20:51:48.790311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.533 [2024-07-15 20:51:48.790319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.533 [2024-07-15 20:51:48.800796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.533 [2024-07-15 20:51:48.800816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.533 [2024-07-15 20:51:48.800824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.533 [2024-07-15 20:51:48.810865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.533 [2024-07-15 20:51:48.810885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.533 [2024-07-15 20:51:48.810893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.533 [2024-07-15 20:51:48.819108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.533 [2024-07-15 20:51:48.819127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.533 [2024-07-15 20:51:48.819135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.533 [2024-07-15 20:51:48.829941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.533 [2024-07-15 
20:51:48.829960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.533 [2024-07-15 20:51:48.829968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.533 [2024-07-15 20:51:48.838848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.533 [2024-07-15 20:51:48.838867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.533 [2024-07-15 20:51:48.838875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.533 [2024-07-15 20:51:48.848300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.533 [2024-07-15 20:51:48.848319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.533 [2024-07-15 20:51:48.848327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.533 [2024-07-15 20:51:48.857571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.533 [2024-07-15 20:51:48.857590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.533 [2024-07-15 20:51:48.857598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.533 [2024-07-15 20:51:48.867602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.533 [2024-07-15 20:51:48.867625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.533 [2024-07-15 20:51:48.867633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.533 [2024-07-15 20:51:48.877657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.533 [2024-07-15 20:51:48.877676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.533 [2024-07-15 20:51:48.877685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.533 [2024-07-15 20:51:48.886618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.533 [2024-07-15 20:51:48.886637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.533 [2024-07-15 20:51:48.886645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.533 [2024-07-15 20:51:48.895402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1353f20) 00:26:14.533 [2024-07-15 20:51:48.895421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.533 [2024-07-15 20:51:48.895429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.533 [2024-07-15 20:51:48.905315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.533 [2024-07-15 20:51:48.905334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.533 [2024-07-15 20:51:48.905342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.533 [2024-07-15 20:51:48.915496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.533 [2024-07-15 20:51:48.915515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.533 [2024-07-15 20:51:48.915523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.533 [2024-07-15 20:51:48.923821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.533 [2024-07-15 20:51:48.923841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.533 [2024-07-15 20:51:48.923848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.533 [2024-07-15 20:51:48.933844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.533 [2024-07-15 20:51:48.933863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.533 [2024-07-15 20:51:48.933872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.533 [2024-07-15 20:51:48.942852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.533 [2024-07-15 20:51:48.942873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:14588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.533 [2024-07-15 20:51:48.942882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.533 [2024-07-15 20:51:48.952640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.533 [2024-07-15 20:51:48.952660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.533 [2024-07-15 20:51:48.952668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.533 [2024-07-15 20:51:48.961053] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.533 [2024-07-15 20:51:48.961072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.534 [2024-07-15 20:51:48.961080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.534 [2024-07-15 20:51:48.972701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.534 [2024-07-15 20:51:48.972721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.534 [2024-07-15 20:51:48.972730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.534 [2024-07-15 20:51:48.983130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.534 [2024-07-15 20:51:48.983152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.534 [2024-07-15 20:51:48.983160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.534 [2024-07-15 20:51:48.993545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.534 [2024-07-15 20:51:48.993565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.534 [2024-07-15 20:51:48.993573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.534 [2024-07-15 20:51:49.002811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.534 [2024-07-15 20:51:49.002831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.534 [2024-07-15 20:51:49.002838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.534 [2024-07-15 20:51:49.012029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.534 [2024-07-15 20:51:49.012052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.534 [2024-07-15 20:51:49.012060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.793 [2024-07-15 20:51:49.022298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.793 [2024-07-15 20:51:49.022320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.793 [2024-07-15 20:51:49.022329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:26:14.793 [2024-07-15 20:51:49.030950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.793 [2024-07-15 20:51:49.030970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.793 [2024-07-15 20:51:49.030983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.793 [2024-07-15 20:51:49.040426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.793 [2024-07-15 20:51:49.040445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.793 [2024-07-15 20:51:49.040454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.793 [2024-07-15 20:51:49.049322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.793 [2024-07-15 20:51:49.049341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.793 [2024-07-15 20:51:49.049350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.793 [2024-07-15 20:51:49.059261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.793 [2024-07-15 20:51:49.059281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.793 [2024-07-15 20:51:49.059289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.793 [2024-07-15 20:51:49.068205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.793 [2024-07-15 20:51:49.068230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.793 [2024-07-15 20:51:49.068239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.793 [2024-07-15 20:51:49.078246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.793 [2024-07-15 20:51:49.078267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.793 [2024-07-15 20:51:49.078276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.793 [2024-07-15 20:51:49.086450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.793 [2024-07-15 20:51:49.086470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.793 [2024-07-15 20:51:49.086478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.793 [2024-07-15 20:51:49.096336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.793 [2024-07-15 20:51:49.096357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.793 [2024-07-15 20:51:49.096365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.793 [2024-07-15 20:51:49.105649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.793 [2024-07-15 20:51:49.105669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.793 [2024-07-15 20:51:49.105677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.793 [2024-07-15 20:51:49.115147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.793 [2024-07-15 20:51:49.115169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.793 [2024-07-15 20:51:49.115177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.793 [2024-07-15 20:51:49.125215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.793 [2024-07-15 20:51:49.125242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.793 [2024-07-15 20:51:49.125252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.793 [2024-07-15 20:51:49.135827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.793 [2024-07-15 20:51:49.135848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.793 [2024-07-15 20:51:49.135857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.794 [2024-07-15 20:51:49.143799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.794 [2024-07-15 20:51:49.143820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.794 [2024-07-15 20:51:49.143829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.794 [2024-07-15 20:51:49.154770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.794 [2024-07-15 20:51:49.154789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.794 [2024-07-15 20:51:49.154798] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.794 [2024-07-15 20:51:49.163299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.794 [2024-07-15 20:51:49.163319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.794 [2024-07-15 20:51:49.163327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.794 [2024-07-15 20:51:49.173722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.794 [2024-07-15 20:51:49.173742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.794 [2024-07-15 20:51:49.173750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.794 [2024-07-15 20:51:49.182583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.794 [2024-07-15 20:51:49.182603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.794 [2024-07-15 20:51:49.182612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.794 [2024-07-15 20:51:49.193520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.794 [2024-07-15 20:51:49.193540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.794 [2024-07-15 20:51:49.193552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.794 [2024-07-15 20:51:49.203734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.794 [2024-07-15 20:51:49.203753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.794 [2024-07-15 20:51:49.203761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.794 [2024-07-15 20:51:49.212746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.794 [2024-07-15 20:51:49.212765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.794 [2024-07-15 20:51:49.212772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.794 [2024-07-15 20:51:49.221156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.794 [2024-07-15 20:51:49.221176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.794 
[2024-07-15 20:51:49.221184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.794 [2024-07-15 20:51:49.231965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.794 [2024-07-15 20:51:49.231984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.794 [2024-07-15 20:51:49.231993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.794 [2024-07-15 20:51:49.240521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.794 [2024-07-15 20:51:49.240541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.794 [2024-07-15 20:51:49.240549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.794 [2024-07-15 20:51:49.250439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.794 [2024-07-15 20:51:49.250458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.794 [2024-07-15 20:51:49.250467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.794 [2024-07-15 20:51:49.260423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.794 [2024-07-15 20:51:49.260442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.794 [2024-07-15 20:51:49.260450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.794 [2024-07-15 20:51:49.271434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:14.794 [2024-07-15 20:51:49.271454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.794 [2024-07-15 20:51:49.271462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.053 [2024-07-15 20:51:49.280282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:15.053 [2024-07-15 20:51:49.280308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.053 [2024-07-15 20:51:49.280316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.053 [2024-07-15 20:51:49.290378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:15.053 [2024-07-15 20:51:49.290398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18563 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.053 [2024-07-15 20:51:49.290406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.053 [2024-07-15 20:51:49.299833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:15.053 [2024-07-15 20:51:49.299853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.053 [2024-07-15 20:51:49.299861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.053 [2024-07-15 20:51:49.308951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:15.053 [2024-07-15 20:51:49.308971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.053 [2024-07-15 20:51:49.308979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.053 [2024-07-15 20:51:49.317891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:15.053 [2024-07-15 20:51:49.317912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.053 [2024-07-15 20:51:49.317920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.053 [2024-07-15 20:51:49.327566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:15.053 [2024-07-15 20:51:49.327586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.053 [2024-07-15 20:51:49.327594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.053 [2024-07-15 20:51:49.337282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:15.053 [2024-07-15 20:51:49.337305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.053 [2024-07-15 20:51:49.337313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.053 [2024-07-15 20:51:49.346993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:15.053 [2024-07-15 20:51:49.347013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.053 [2024-07-15 20:51:49.347021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.053 [2024-07-15 20:51:49.356157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:15.053 [2024-07-15 20:51:49.356176] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.053 [2024-07-15 20:51:49.356185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.053 [2024-07-15 20:51:49.365665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:15.053 [2024-07-15 20:51:49.365684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.053 [2024-07-15 20:51:49.365692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.053 [2024-07-15 20:51:49.376746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:15.053 [2024-07-15 20:51:49.376766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.053 [2024-07-15 20:51:49.376774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.053 [2024-07-15 20:51:49.384733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:15.053 [2024-07-15 20:51:49.384752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:25172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.053 [2024-07-15 20:51:49.384761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.053 [2024-07-15 20:51:49.395012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:15.053 [2024-07-15 20:51:49.395032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.053 [2024-07-15 20:51:49.395040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.053 [2024-07-15 20:51:49.405039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:15.053 [2024-07-15 20:51:49.405059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.053 [2024-07-15 20:51:49.405067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.053 [2024-07-15 20:51:49.414480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:15.053 [2024-07-15 20:51:49.414501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.053 [2024-07-15 20:51:49.414509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.053 [2024-07-15 20:51:49.424716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20) 00:26:15.054 [2024-07-15 
20:51:49.424736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.054 [2024-07-15 20:51:49.424744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.054 [2024-07-15 20:51:49.433191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20)
00:26:15.054 [2024-07-15 20:51:49.433210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.054 [2024-07-15 20:51:49.433218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
(the same three-line pattern -- a data digest error on tqpair=(0x1353f20) from nvme_tcp.c:1459, the affected READ command, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion -- repeats continuously from 20:51:49.443 to 20:51:50.626 as each outstanding read completes with a corrupted data digest)
00:26:16.354 [2024-07-15 20:51:50.626546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1353f20)
00:26:16.354 [2024-07-15 20:51:50.626565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:16.354 [2024-07-15 20:51:50.626573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:16.354
00:26:16.354 Latency(us)
00:26:16.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:16.354 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:26:16.354 nvme0n1 : 2.04 25970.11 101.45 0.00 0.00 4826.10 2208.28 44906.41
00:26:16.354 ===================================================================================================================
00:26:16.354 Total : 25970.11 101.45 0.00 0.00 4826.10 2208.28 44906.41
00:26:16.354 0
00:26:16.354 20:51:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:16.354 20:51:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:16.354 20:51:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:16.354 | .driver_specific
00:26:16.354 | .nvme_error
00:26:16.354 | .status_code
00:26:16.354 | .command_transient_transport_error'
00:26:16.354 20:51:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:16.613 20:51:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 208 > 0 ))
00:26:16.613 20:51:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2833748
00:26:16.613 20:51:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2833748 ']'
00:26:16.613 20:51:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2833748
00:26:16.613 20:51:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:26:16.613 20:51:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:16.613 20:51:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2833748
00:26:16.613 20:51:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:26:16.613 20:51:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:26:16.613 20:51:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2833748'
00:26:16.613 killing process with pid 2833748
00:26:16.613 20:51:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2833748
00:26:16.613 Received shutdown signal, test time was about 2.000000 seconds
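(For readability, the transient-error check traced just above -- host/digest.sh@71/@27/@28 -- amounts to the shell sketch below. The function body is reconstructed from the xtrace lines, not copied from digest.sh, and bperf_rpc is assumed to be a thin wrapper that points rpc.py at the bdevperf RPC socket.)

get_transient_errcount() {
    local bdev=$1
    # bdev_get_iostat carries per-controller NVMe error counters when bdev_nvme
    # runs with --nvme-error-stat; pull out the transient transport error count.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}

# digest.sh@71 passes only if at least one transient transport error was
# recorded; this run read back 208 of them.
(( $(get_transient_errcount nvme0n1) > 0 ))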
00:26:16.613 Latency(us)
00:26:16.613 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:16.613 ===================================================================================================================
00:26:16.613 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:16.613 20:51:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2833748
00:26:16.613 20:51:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:26:16.613 20:51:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:16.613 20:51:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:26:16.613 20:51:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:26:16.613 20:51:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:26:16.613 20:51:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2834442
00:26:16.613 20:51:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2834442 /var/tmp/bperf.sock
00:26:16.613 20:51:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2834442 ']'
00:26:16.613 20:51:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:26:16.613 20:51:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:16.613 20:51:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:16.613 20:51:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:16.613 20:51:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:16.613 20:51:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:16.872 [2024-07-15 20:51:51.116416] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization...
00:26:16.872 [2024-07-15 20:51:51.116460] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2834442 ]
00:26:16.872 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:16.872 Zero copy mechanism will not be used.
00:26:16.872 EAL: No free 2048 kB hugepages reported on node 1
00:26:16.872 [2024-07-15 20:51:51.168901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:16.872 [2024-07-15 20:51:51.240599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:17.439 20:51:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:17.439 20:51:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:26:17.439 20:51:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:17.439 20:51:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:17.699 20:51:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:17.699 20:51:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:17.699 20:51:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:17.699 20:51:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:17.699 20:51:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:17.699 20:51:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:17.992 nvme0n1
00:26:17.992 20:51:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:26:17.992 20:51:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:17.992 20:51:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:18.252 20:51:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:18.252 20:51:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:18.252 20:51:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:18.252 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:18.252 Zero copy mechanism will not be used.
00:26:18.252 Running I/O for 2 seconds...
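The trace above is the entire setup for this error case. Condensed into plain shell, the sequence is roughly the following sketch (paths relative to the spdk checkout; bperf_rpc explicitly targets bdevperf's /var/tmp/bperf.sock, while rpc_cmd's socket is hidden by set +x in the trace, so the default target RPC socket is assumed for those calls):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # keep per-status-code NVMe error counters and retry failed I/O indefinitely
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # clear any stale crc32c error injection in the accel framework
  scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  # attach the target with TCP data digest enabled; each data PDU now carries a crc32c
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # inject crc32c corruption (as traced: -o crc32c -t corrupt -i 32) so data digest
  # verification fails and reads complete with a transient transport error
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
  # start the queued job in the bdevperf instance launched earlier with -z (wait for RPC)
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The storm of "data digest error" / COMMAND TRANSIENT TRANSPORT ERROR records that follows is therefore the intended outcome of the injection, not a test failure.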
00:26:18.252 [2024-07-15 20:51:52.578409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.252 [2024-07-15 20:51:52.578441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.252 [2024-07-15 20:51:52.578451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.252 [2024-07-15 20:51:52.588097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.252 [2024-07-15 20:51:52.588121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.252 [2024-07-15 20:51:52.588130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.252 [2024-07-15 20:51:52.596840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.252 [2024-07-15 20:51:52.596861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.252 [2024-07-15 20:51:52.596870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.252 [2024-07-15 20:51:52.604745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.252 [2024-07-15 20:51:52.604765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.252 [2024-07-15 20:51:52.604774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.252 [2024-07-15 20:51:52.611643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.252 [2024-07-15 20:51:52.611662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.252 [2024-07-15 20:51:52.611671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.252 [2024-07-15 20:51:52.618446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.252 [2024-07-15 20:51:52.618466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.252 [2024-07-15 20:51:52.618474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.252 [2024-07-15 20:51:52.624999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.252 [2024-07-15 20:51:52.625020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.252 [2024-07-15 20:51:52.625033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.252 [2024-07-15 20:51:52.631401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.252 [2024-07-15 20:51:52.631424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.252 [2024-07-15 20:51:52.631431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.252 [2024-07-15 20:51:52.637720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.252 [2024-07-15 20:51:52.637740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.252 [2024-07-15 20:51:52.637748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.252 [2024-07-15 20:51:52.643842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.252 [2024-07-15 20:51:52.643863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.252 [2024-07-15 20:51:52.643871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.252 [2024-07-15 20:51:52.650015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.252 [2024-07-15 20:51:52.650035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.252 [2024-07-15 20:51:52.650043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.252 [2024-07-15 20:51:52.656024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.252 [2024-07-15 20:51:52.656043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.252 [2024-07-15 20:51:52.656051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.252 [2024-07-15 20:51:52.662055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.252 [2024-07-15 20:51:52.662076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.252 [2024-07-15 20:51:52.662084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.252 [2024-07-15 20:51:52.668043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.252 [2024-07-15 20:51:52.668064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.252 [2024-07-15 20:51:52.668072] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.252 [2024-07-15 20:51:52.673453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.252 [2024-07-15 20:51:52.673474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.252 [2024-07-15 20:51:52.673482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.252 [2024-07-15 20:51:52.679198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.252 [2024-07-15 20:51:52.679218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.252 [2024-07-15 20:51:52.679233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.252 [2024-07-15 20:51:52.685522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.252 [2024-07-15 20:51:52.685544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.252 [2024-07-15 20:51:52.685553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.252 [2024-07-15 20:51:52.691279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.252 [2024-07-15 20:51:52.691299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.252 [2024-07-15 20:51:52.691307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.252 [2024-07-15 20:51:52.699554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.252 [2024-07-15 20:51:52.699575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.253 [2024-07-15 20:51:52.699583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.253 [2024-07-15 20:51:52.709220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.253 [2024-07-15 20:51:52.709247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.253 [2024-07-15 20:51:52.709255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.253 [2024-07-15 20:51:52.718010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.253 [2024-07-15 20:51:52.718031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.253 
[2024-07-15 20:51:52.718039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.253 [2024-07-15 20:51:52.726349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.253 [2024-07-15 20:51:52.726369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.253 [2024-07-15 20:51:52.726377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.253 [2024-07-15 20:51:52.733546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.253 [2024-07-15 20:51:52.733566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.253 [2024-07-15 20:51:52.733574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.513 [2024-07-15 20:51:52.740126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.513 [2024-07-15 20:51:52.740152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.513 [2024-07-15 20:51:52.740164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.513 [2024-07-15 20:51:52.747690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.513 [2024-07-15 20:51:52.747711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.513 [2024-07-15 20:51:52.747719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.513 [2024-07-15 20:51:52.754197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.513 [2024-07-15 20:51:52.754218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.513 [2024-07-15 20:51:52.754232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.513 [2024-07-15 20:51:52.761745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.513 [2024-07-15 20:51:52.761766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.513 [2024-07-15 20:51:52.761775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.513 [2024-07-15 20:51:52.769852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.513 [2024-07-15 20:51:52.769874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24736 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.513 [2024-07-15 20:51:52.769883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.513 [2024-07-15 20:51:52.777441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.513 [2024-07-15 20:51:52.777463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.513 [2024-07-15 20:51:52.777471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.513 [2024-07-15 20:51:52.784734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.513 [2024-07-15 20:51:52.784755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.513 [2024-07-15 20:51:52.784763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.513 [2024-07-15 20:51:52.792876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.513 [2024-07-15 20:51:52.792897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.513 [2024-07-15 20:51:52.792905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.513 [2024-07-15 20:51:52.803089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.513 [2024-07-15 20:51:52.803110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.513 [2024-07-15 20:51:52.803119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.513 [2024-07-15 20:51:52.813766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.513 [2024-07-15 20:51:52.813792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.513 [2024-07-15 20:51:52.813800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.513 [2024-07-15 20:51:52.823695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.513 [2024-07-15 20:51:52.823716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.513 [2024-07-15 20:51:52.823724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.513 [2024-07-15 20:51:52.833528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.513 [2024-07-15 20:51:52.833550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:9 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.513 [2024-07-15 20:51:52.833558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.513 [2024-07-15 20:51:52.842477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.513 [2024-07-15 20:51:52.842499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.513 [2024-07-15 20:51:52.842507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.513 [2024-07-15 20:51:52.851495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.513 [2024-07-15 20:51:52.851517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.513 [2024-07-15 20:51:52.851526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.513 [2024-07-15 20:51:52.860524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.513 [2024-07-15 20:51:52.860545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.513 [2024-07-15 20:51:52.860554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.513 [2024-07-15 20:51:52.871260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.513 [2024-07-15 20:51:52.871281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.513 [2024-07-15 20:51:52.871289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.513 [2024-07-15 20:51:52.880353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.513 [2024-07-15 20:51:52.880374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.513 [2024-07-15 20:51:52.880382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.513 [2024-07-15 20:51:52.889467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.513 [2024-07-15 20:51:52.889489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.513 [2024-07-15 20:51:52.889498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.513 [2024-07-15 20:51:52.898524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.513 [2024-07-15 20:51:52.898545] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.513 [2024-07-15 20:51:52.898554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.513 [2024-07-15 20:51:52.907043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.513 [2024-07-15 20:51:52.907064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.513 [2024-07-15 20:51:52.907073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.513 [2024-07-15 20:51:52.917868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.513 [2024-07-15 20:51:52.917888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.513 [2024-07-15 20:51:52.917897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.513 [2024-07-15 20:51:52.927087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.513 [2024-07-15 20:51:52.927108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.513 [2024-07-15 20:51:52.927116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.513 [2024-07-15 20:51:52.937532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.513 [2024-07-15 20:51:52.937554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.513 [2024-07-15 20:51:52.937562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.513 [2024-07-15 20:51:52.946877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.513 [2024-07-15 20:51:52.946898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.513 [2024-07-15 20:51:52.946906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.513 [2024-07-15 20:51:52.957118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.513 [2024-07-15 20:51:52.957138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.513 [2024-07-15 20:51:52.957146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.513 [2024-07-15 20:51:52.966435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.513 
[2024-07-15 20:51:52.966456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.513 [2024-07-15 20:51:52.966465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.514 [2024-07-15 20:51:52.975983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.514 [2024-07-15 20:51:52.976004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.514 [2024-07-15 20:51:52.976016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.514 [2024-07-15 20:51:52.986247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.514 [2024-07-15 20:51:52.986268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.514 [2024-07-15 20:51:52.986277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.773 [2024-07-15 20:51:52.997098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.773 [2024-07-15 20:51:52.997121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.773 [2024-07-15 20:51:52.997130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.773 [2024-07-15 20:51:53.008363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.773 [2024-07-15 20:51:53.008387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.773 [2024-07-15 20:51:53.008396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.773 [2024-07-15 20:51:53.019349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.773 [2024-07-15 20:51:53.019372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.773 [2024-07-15 20:51:53.019380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.774 [2024-07-15 20:51:53.028980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.774 [2024-07-15 20:51:53.029003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.774 [2024-07-15 20:51:53.029011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.774 [2024-07-15 20:51:53.039015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1dde0b0) 00:26:18.774 [2024-07-15 20:51:53.039038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.774 [2024-07-15 20:51:53.039046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.774 [2024-07-15 20:51:53.050001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.774 [2024-07-15 20:51:53.050024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.774 [2024-07-15 20:51:53.050033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.774 [2024-07-15 20:51:53.058782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.774 [2024-07-15 20:51:53.058802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.774 [2024-07-15 20:51:53.058810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.774 [2024-07-15 20:51:53.067259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.774 [2024-07-15 20:51:53.067283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.774 [2024-07-15 20:51:53.067291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.774 [2024-07-15 20:51:53.075330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.774 [2024-07-15 20:51:53.075350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.774 [2024-07-15 20:51:53.075359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.774 [2024-07-15 20:51:53.082592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.774 [2024-07-15 20:51:53.082613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.774 [2024-07-15 20:51:53.082620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.774 [2024-07-15 20:51:53.089922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.774 [2024-07-15 20:51:53.089943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.774 [2024-07-15 20:51:53.089951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.774 [2024-07-15 20:51:53.096910] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.774 [2024-07-15 20:51:53.096931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.774 [2024-07-15 20:51:53.096938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.774 [2024-07-15 20:51:53.102957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.774 [2024-07-15 20:51:53.102977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.774 [2024-07-15 20:51:53.102986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.774 [2024-07-15 20:51:53.109312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.774 [2024-07-15 20:51:53.109331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.774 [2024-07-15 20:51:53.109339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.774 [2024-07-15 20:51:53.119768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.774 [2024-07-15 20:51:53.119790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.774 [2024-07-15 20:51:53.119798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.774 [2024-07-15 20:51:53.128833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.774 [2024-07-15 20:51:53.128853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.774 [2024-07-15 20:51:53.128861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.774 [2024-07-15 20:51:53.137645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.774 [2024-07-15 20:51:53.137667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.774 [2024-07-15 20:51:53.137675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.774 [2024-07-15 20:51:53.145371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.774 [2024-07-15 20:51:53.145391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.774 [2024-07-15 20:51:53.145399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:26:18.774 [2024-07-15 20:51:53.153739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.774 [2024-07-15 20:51:53.153758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.774 [2024-07-15 20:51:53.153767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.774 [2024-07-15 20:51:53.162808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.774 [2024-07-15 20:51:53.162829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.774 [2024-07-15 20:51:53.162837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.774 [2024-07-15 20:51:53.170849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.774 [2024-07-15 20:51:53.170870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.774 [2024-07-15 20:51:53.170878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.774 [2024-07-15 20:51:53.178186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.774 [2024-07-15 20:51:53.178206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.774 [2024-07-15 20:51:53.178214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.774 [2024-07-15 20:51:53.185667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.774 [2024-07-15 20:51:53.185688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.774 [2024-07-15 20:51:53.185695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.774 [2024-07-15 20:51:53.192777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.774 [2024-07-15 20:51:53.192797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.774 [2024-07-15 20:51:53.192805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.774 [2024-07-15 20:51:53.202466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.774 [2024-07-15 20:51:53.202487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.774 [2024-07-15 20:51:53.202498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.774 [2024-07-15 20:51:53.211269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.774 [2024-07-15 20:51:53.211290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.774 [2024-07-15 20:51:53.211298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.774 [2024-07-15 20:51:53.219103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.774 [2024-07-15 20:51:53.219123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.774 [2024-07-15 20:51:53.219131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.774 [2024-07-15 20:51:53.226443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.774 [2024-07-15 20:51:53.226464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.774 [2024-07-15 20:51:53.226472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.774 [2024-07-15 20:51:53.235930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.774 [2024-07-15 20:51:53.235950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.774 [2024-07-15 20:51:53.235958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.774 [2024-07-15 20:51:53.245293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.774 [2024-07-15 20:51:53.245313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.774 [2024-07-15 20:51:53.245321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.774 [2024-07-15 20:51:53.254340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:18.774 [2024-07-15 20:51:53.254360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.774 [2024-07-15 20:51:53.254367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.035 [2024-07-15 20:51:53.262871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:19.035 [2024-07-15 20:51:53.262892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.035 [2024-07-15 20:51:53.262901] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.035 [2024-07-15 20:51:53.272379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:19.035 [2024-07-15 20:51:53.272400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.035 [2024-07-15 20:51:53.272408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.035 [2024-07-15 20:51:53.280647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:19.035 [2024-07-15 20:51:53.280667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.035 [2024-07-15 20:51:53.280675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.035 [2024-07-15 20:51:53.288733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:19.035 [2024-07-15 20:51:53.288753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.035 [2024-07-15 20:51:53.288762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.035 [2024-07-15 20:51:53.296390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:19.035 [2024-07-15 20:51:53.296410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.035 [2024-07-15 20:51:53.296418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.035 [2024-07-15 20:51:53.303978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:19.035 [2024-07-15 20:51:53.303999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.035 [2024-07-15 20:51:53.304007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.035 [2024-07-15 20:51:53.311484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:19.035 [2024-07-15 20:51:53.311505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.035 [2024-07-15 20:51:53.311514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.035 [2024-07-15 20:51:53.317943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:19.035 [2024-07-15 20:51:53.317963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.035 [2024-07-15 
20:51:53.317971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.035 [2024-07-15 20:51:53.326164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:19.035 [2024-07-15 20:51:53.326185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.035 [2024-07-15 20:51:53.326193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.035 [2024-07-15 20:51:53.335551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:19.035 [2024-07-15 20:51:53.335571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.035 [2024-07-15 20:51:53.335580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.035 [2024-07-15 20:51:53.344073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:19.035 [2024-07-15 20:51:53.344093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.035 [2024-07-15 20:51:53.344104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.035 [2024-07-15 20:51:53.351762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:19.035 [2024-07-15 20:51:53.351782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.035 [2024-07-15 20:51:53.351790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.035 [2024-07-15 20:51:53.361881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:19.035 [2024-07-15 20:51:53.361902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.035 [2024-07-15 20:51:53.361909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.035 [2024-07-15 20:51:53.371881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:19.035 [2024-07-15 20:51:53.371902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.035 [2024-07-15 20:51:53.371911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.035 [2024-07-15 20:51:53.381453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:19.035 [2024-07-15 20:51:53.381474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:19.035 [2024-07-15 20:51:53.381482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:19.035 [2024-07-15 20:51:53.389720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0)
00:26:19.035 [2024-07-15 20:51:53.389742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.035 [2024-07-15 20:51:53.389750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-record pattern — data digest error on tqpair=(0x1dde0b0), READ command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion — repeats on qid:1 from 20:51:53.398220 through 20:51:54.173930, varying only in cid, lba, and sqhd ...]
00:26:19.820 [2024-07-15 20:51:54.179427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0)
00:26:19.820 [2024-07-15 20:51:54.179448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.820 [2024-07-15 20:51:54.179456]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.820 [2024-07-15 20:51:54.184949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:19.820 [2024-07-15 20:51:54.184969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.820 [2024-07-15 20:51:54.184977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.820 [2024-07-15 20:51:54.190617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:19.820 [2024-07-15 20:51:54.190638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.820 [2024-07-15 20:51:54.190646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.820 [2024-07-15 20:51:54.196301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:19.820 [2024-07-15 20:51:54.196321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.820 [2024-07-15 20:51:54.196329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.820 [2024-07-15 20:51:54.201840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:19.820 [2024-07-15 20:51:54.201860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.820 [2024-07-15 20:51:54.201868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.820 [2024-07-15 20:51:54.207436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:19.820 [2024-07-15 20:51:54.207460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.820 [2024-07-15 20:51:54.207468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.820 [2024-07-15 20:51:54.212974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:19.820 [2024-07-15 20:51:54.212995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.820 [2024-07-15 20:51:54.213003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.820 [2024-07-15 20:51:54.218637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:19.820 [2024-07-15 20:51:54.218658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:19.820 [2024-07-15 20:51:54.218666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.820 [2024-07-15 20:51:54.224326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:19.820 [2024-07-15 20:51:54.224346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.821 [2024-07-15 20:51:54.224354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.821 [2024-07-15 20:51:54.229900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:19.821 [2024-07-15 20:51:54.229920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.821 [2024-07-15 20:51:54.229928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.821 [2024-07-15 20:51:54.235459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:19.821 [2024-07-15 20:51:54.235480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.821 [2024-07-15 20:51:54.235488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.821 [2024-07-15 20:51:54.241102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:19.821 [2024-07-15 20:51:54.241123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.821 [2024-07-15 20:51:54.241130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.821 [2024-07-15 20:51:54.246747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:19.821 [2024-07-15 20:51:54.246768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.821 [2024-07-15 20:51:54.246776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.821 [2024-07-15 20:51:54.252360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:19.821 [2024-07-15 20:51:54.252380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.821 [2024-07-15 20:51:54.252389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.821 [2024-07-15 20:51:54.258018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:19.821 [2024-07-15 20:51:54.258038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.821 [2024-07-15 20:51:54.258046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.821 [2024-07-15 20:51:54.263741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:19.821 [2024-07-15 20:51:54.263762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.821 [2024-07-15 20:51:54.263770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.821 [2024-07-15 20:51:54.269436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:19.821 [2024-07-15 20:51:54.269457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.821 [2024-07-15 20:51:54.269465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.821 [2024-07-15 20:51:54.275123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:19.821 [2024-07-15 20:51:54.275143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.821 [2024-07-15 20:51:54.275151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.821 [2024-07-15 20:51:54.280785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:19.821 [2024-07-15 20:51:54.280806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.821 [2024-07-15 20:51:54.280814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.821 [2024-07-15 20:51:54.286525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:19.821 [2024-07-15 20:51:54.286546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.821 [2024-07-15 20:51:54.286553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.821 [2024-07-15 20:51:54.292244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:19.821 [2024-07-15 20:51:54.292265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.821 [2024-07-15 20:51:54.292272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.821 [2024-07-15 20:51:54.297871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:19.821 [2024-07-15 20:51:54.297891] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.821 [2024-07-15 20:51:54.297899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.081 [2024-07-15 20:51:54.303468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.081 [2024-07-15 20:51:54.303489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.081 [2024-07-15 20:51:54.303501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.081 [2024-07-15 20:51:54.309183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.081 [2024-07-15 20:51:54.309203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.081 [2024-07-15 20:51:54.309211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.081 [2024-07-15 20:51:54.314846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.081 [2024-07-15 20:51:54.314866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.081 [2024-07-15 20:51:54.314874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.081 [2024-07-15 20:51:54.320464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.081 [2024-07-15 20:51:54.320484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.081 [2024-07-15 20:51:54.320492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.081 [2024-07-15 20:51:54.326121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.081 [2024-07-15 20:51:54.326141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.081 [2024-07-15 20:51:54.326149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.081 [2024-07-15 20:51:54.331903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.081 [2024-07-15 20:51:54.331924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.081 [2024-07-15 20:51:54.331932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.081 [2024-07-15 20:51:54.337735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 
00:26:20.081 [2024-07-15 20:51:54.337755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.081 [2024-07-15 20:51:54.337763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.081 [2024-07-15 20:51:54.343489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.081 [2024-07-15 20:51:54.343509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.081 [2024-07-15 20:51:54.343517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.081 [2024-07-15 20:51:54.349200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.081 [2024-07-15 20:51:54.349221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.081 [2024-07-15 20:51:54.349236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.081 [2024-07-15 20:51:54.354995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.081 [2024-07-15 20:51:54.355019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.081 [2024-07-15 20:51:54.355027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.081 [2024-07-15 20:51:54.360724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.081 [2024-07-15 20:51:54.360744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.081 [2024-07-15 20:51:54.360752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.081 [2024-07-15 20:51:54.366550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.081 [2024-07-15 20:51:54.366570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.081 [2024-07-15 20:51:54.366578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.081 [2024-07-15 20:51:54.372345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.081 [2024-07-15 20:51:54.372365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.081 [2024-07-15 20:51:54.372373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.081 [2024-07-15 20:51:54.378182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.081 [2024-07-15 20:51:54.378202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.081 [2024-07-15 20:51:54.378210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.081 [2024-07-15 20:51:54.384191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.081 [2024-07-15 20:51:54.384211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.081 [2024-07-15 20:51:54.384219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.081 [2024-07-15 20:51:54.390102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.082 [2024-07-15 20:51:54.390123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.082 [2024-07-15 20:51:54.390131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.082 [2024-07-15 20:51:54.395834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.082 [2024-07-15 20:51:54.395855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.082 [2024-07-15 20:51:54.395863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.082 [2024-07-15 20:51:54.401504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.082 [2024-07-15 20:51:54.401524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.082 [2024-07-15 20:51:54.401532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.082 [2024-07-15 20:51:54.407185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.082 [2024-07-15 20:51:54.407205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.082 [2024-07-15 20:51:54.407214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.082 [2024-07-15 20:51:54.412798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.082 [2024-07-15 20:51:54.412818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.082 [2024-07-15 20:51:54.412826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.082 [2024-07-15 20:51:54.418483] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.082 [2024-07-15 20:51:54.418503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.082 [2024-07-15 20:51:54.418511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.082 [2024-07-15 20:51:54.424204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.082 [2024-07-15 20:51:54.424230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.082 [2024-07-15 20:51:54.424239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.082 [2024-07-15 20:51:54.429833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.082 [2024-07-15 20:51:54.429853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.082 [2024-07-15 20:51:54.429861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.082 [2024-07-15 20:51:54.435158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.082 [2024-07-15 20:51:54.435178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.082 [2024-07-15 20:51:54.435186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.082 [2024-07-15 20:51:54.440857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.082 [2024-07-15 20:51:54.440877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.082 [2024-07-15 20:51:54.440885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.082 [2024-07-15 20:51:54.446486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.082 [2024-07-15 20:51:54.446506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.082 [2024-07-15 20:51:54.446514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.082 [2024-07-15 20:51:54.451974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.082 [2024-07-15 20:51:54.451997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.082 [2024-07-15 20:51:54.452005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:26:20.082 [2024-07-15 20:51:54.457422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.082 [2024-07-15 20:51:54.457442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.082 [2024-07-15 20:51:54.457450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.082 [2024-07-15 20:51:54.462872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.082 [2024-07-15 20:51:54.462891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.082 [2024-07-15 20:51:54.462899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.082 [2024-07-15 20:51:54.468435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.082 [2024-07-15 20:51:54.468455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.082 [2024-07-15 20:51:54.468464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.082 [2024-07-15 20:51:54.474113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.082 [2024-07-15 20:51:54.474133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.082 [2024-07-15 20:51:54.474141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.082 [2024-07-15 20:51:54.479797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.082 [2024-07-15 20:51:54.479818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.082 [2024-07-15 20:51:54.479826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.082 [2024-07-15 20:51:54.485379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.082 [2024-07-15 20:51:54.485399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.082 [2024-07-15 20:51:54.485408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.082 [2024-07-15 20:51:54.491014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.082 [2024-07-15 20:51:54.491036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.082 [2024-07-15 20:51:54.491044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.082 [2024-07-15 20:51:54.496617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.082 [2024-07-15 20:51:54.496638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.082 [2024-07-15 20:51:54.496646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.082 [2024-07-15 20:51:54.502443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.082 [2024-07-15 20:51:54.502464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.082 [2024-07-15 20:51:54.502473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.082 [2024-07-15 20:51:54.508210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.082 [2024-07-15 20:51:54.508236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.082 [2024-07-15 20:51:54.508245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.082 [2024-07-15 20:51:54.513776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.082 [2024-07-15 20:51:54.513797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.082 [2024-07-15 20:51:54.513805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.082 [2024-07-15 20:51:54.519406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.082 [2024-07-15 20:51:54.519427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.083 [2024-07-15 20:51:54.519436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.083 [2024-07-15 20:51:54.525178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.083 [2024-07-15 20:51:54.525198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.083 [2024-07-15 20:51:54.525207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.083 [2024-07-15 20:51:54.530892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.083 [2024-07-15 20:51:54.530912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.083 [2024-07-15 20:51:54.530920] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.083 [2024-07-15 20:51:54.536417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.083 [2024-07-15 20:51:54.536437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.083 [2024-07-15 20:51:54.536445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.083 [2024-07-15 20:51:54.542016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.083 [2024-07-15 20:51:54.542036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.083 [2024-07-15 20:51:54.542045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.083 [2024-07-15 20:51:54.547736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.083 [2024-07-15 20:51:54.547755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.083 [2024-07-15 20:51:54.547767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.083 [2024-07-15 20:51:54.553653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.083 [2024-07-15 20:51:54.553674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.083 [2024-07-15 20:51:54.553683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.083 [2024-07-15 20:51:54.559311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.083 [2024-07-15 20:51:54.559331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.083 [2024-07-15 20:51:54.559339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.342 [2024-07-15 20:51:54.564948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.342 [2024-07-15 20:51:54.564969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.342 [2024-07-15 20:51:54.564978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.342 [2024-07-15 20:51:54.570550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dde0b0) 00:26:20.342 [2024-07-15 20:51:54.570571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:20.342 [2024-07-15 20:51:54.570580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:20.342
00:26:20.342 Latency(us)
00:26:20.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:20.342 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:26:20.342 nvme0n1 : 2.00 4709.43 588.68 0.00 0.00 3394.32 698.10 11226.60
00:26:20.342 ===================================================================================================================
00:26:20.342 Total : 4709.43 588.68 0.00 0.00 3394.32 698.10 11226.60
00:26:20.342 0
00:26:20.342 20:51:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
20:51:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
20:51:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:20.342 | .driver_specific
00:26:20.342 | .nvme_error
00:26:20.342 | .status_code
00:26:20.342 | .command_transient_transport_error'
20:51:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
20:51:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 304 > 0 ))
20:51:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2834442
20:51:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2834442 ']'
20:51:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2834442
20:51:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
20:51:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
20:51:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2834442
20:51:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
20:51:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
20:51:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2834442'
killing process with pid 2834442
20:51:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2834442
Received shutdown signal, test time was about 2.000000 seconds
00:26:20.342
00:26:20.342 Latency(us)
00:26:20.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:20.342 ===================================================================================================================
00:26:20.342 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
20:51:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2834442
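The digest-error flood above is this leg's intended failure mode, not a crash: the harness injects crc32c corruption through the accel framework (the accel_error_inject_error calls traced below), the NVMe/TCP data digest on each read payload stops verifying, and every affected READ completes with NVMe generic status 0x22, which spdk_nvme_print_completion renders as COMMAND TRANSIENT TRANSPORT ERROR (00/22). Because bdev_nvme runs with --nvme-error-stat and --bdev-retry-count -1 (the same options visible in the randwrite setup below), each such completion is retried and tallied per status code, and the (( 304 > 0 )) line above is the actual assertion: 304 transient transport errors were counted for the 2-second randread job. A minimal sketch of the same readout done by hand, assuming the RPC socket and bdev name used in this run:

  # Sketch only -- mirrors get_transient_errcount in host/digest.sh:
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'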
00:26:20.602 20:51:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
20:51:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
20:51:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
20:51:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
20:51:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
20:51:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2835007
20:51:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2835007 /var/tmp/bperf.sock
20:51:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2835007 ']'
20:51:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
20:51:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
20:51:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
20:51:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
20:51:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
20:51:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:20.861 [2024-07-15 20:51:55.037518] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization...
00:26:20.861 [2024-07-15 20:51:55.037566] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2835007 ]
00:26:20.861 EAL: No free 2048 kB hugepages reported on node 1
00:26:20.861 [2024-07-15 20:51:55.092498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:20.861 [2024-07-15 20:51:55.172035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:21.429 20:51:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:21.429 20:51:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
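bdevperf is launched above with -z, so it sits idle until a perform_tests RPC arrives (which is why the trace below only reports "Running I/O" after the bdevperf.py perform_tests call). Condensed into plain commands, with paths exactly as they appear in this trace, the setup that follows is (a sketch of the traced sequence, not a standalone script; note that the accel_error_inject_error calls go through rpc_cmd without -s /var/tmp/bperf.sock, i.e. to the application behind the harness's default RPC socket rather than to bdevperf):

  rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc.py accel_error_inject_error -o crc32c -t disable          # injection off while connecting
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # arm crc32c corruption, parameters as traced
  bdevperf.py -s /var/tmp/bperf.sock perform_tests              # start the 2-second randwrite job

--ddgst enables the NVMe/TCP data digest on the new controller, so once the corruption is armed the digest check on the write payloads fails (the tcp.c:2081 data_crc32_calc_done errors further down) and the WRITEs complete back as COMMAND TRANSIENT TRANSPORT ERROR (00/22), feeding the same per-status-code counter that was checked after the randread leg.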
00:26:21.429 20:51:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:21.429 20:51:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:21.689 20:51:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:21.689 20:51:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:21.689 20:51:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:21.689 20:51:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:21.689 20:51:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
20:51:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:21.948 nvme0n1
00:26:21.948 20:51:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
20:51:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
20:51:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
20:51:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
20:51:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
20:51:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:21.948 Running I/O for 2 seconds...
00:26:21.948 [2024-07-15 20:51:56.392247] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f8a50
00:26:21.948 [2024-07-15 20:51:56.393155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.948 [2024-07-15 20:51:56.393183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:26:21.948 [2024-07-15 20:51:56.402601] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190ddc00
00:26:21.948 [2024-07-15 20:51:56.403605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.948 [2024-07-15 20:51:56.403626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:26:21.948 [2024-07-15 20:51:56.411830] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190dece0
00:26:21.948 [2024-07-15 20:51:56.412827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.948 [2024-07-15 20:51:56.412847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:26:21.948 [2024-07-15 20:51:56.420270] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e49b0
00:26:21.948 [2024-07-15 20:51:56.421697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:14975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.948 [2024-07-15 20:51:56.421716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:26:21.948 [2024-07-15 20:51:56.428381] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190fcdd0
00:26:21.948 [2024-07-15 20:51:56.429048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:21.948 [2024-07-15 20:51:56.429066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
[~20 near-identical record groups elided (20:51:56.438 - 20:51:56.614): Data digest error on tqpair=(0xbfd4d0) on WRITE commands, each completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22); only the timestamp, pdu, cid, lba, and sqhd fields vary]
00:26:22.208 [2024-07-15 20:51:56.622557] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f8a50
00:26:22.208 [2024-07-15 20:51:56.623423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15553 len:1 SGL DATA BLOCK OFFSET
0x0 len:0x1000 00:26:22.208 [2024-07-15 20:51:56.623440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:22.208 [2024-07-15 20:51:56.631695] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f7970 00:26:22.208 [2024-07-15 20:51:56.632561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.208 [2024-07-15 20:51:56.632579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:22.208 [2024-07-15 20:51:56.640877] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190ed920 00:26:22.208 [2024-07-15 20:51:56.641740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.208 [2024-07-15 20:51:56.641758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:22.208 [2024-07-15 20:51:56.650015] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190fcdd0 00:26:22.208 [2024-07-15 20:51:56.650909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.208 [2024-07-15 20:51:56.650926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:22.208 [2024-07-15 20:51:56.659597] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190eee38 00:26:22.208 [2024-07-15 20:51:56.660319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:2390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.208 [2024-07-15 20:51:56.660337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:22.208 [2024-07-15 20:51:56.668112] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f8618 00:26:22.208 [2024-07-15 20:51:56.669565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.208 [2024-07-15 20:51:56.669584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:22.208 [2024-07-15 20:51:56.676124] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f20d8 00:26:22.208 [2024-07-15 20:51:56.676813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.208 [2024-07-15 20:51:56.676835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:22.208 [2024-07-15 20:51:56.685768] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e9e10 00:26:22.208 [2024-07-15 20:51:56.686550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3817 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.208 [2024-07-15 20:51:56.686568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:22.468 [2024-07-15 20:51:56.695380] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f46d0 00:26:22.468 [2024-07-15 20:51:56.696300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.468 [2024-07-15 20:51:56.696318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:22.468 [2024-07-15 20:51:56.704944] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190fda78 00:26:22.468 [2024-07-15 20:51:56.705997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.468 [2024-07-15 20:51:56.706015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:22.468 [2024-07-15 20:51:56.715883] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f0ff8 00:26:22.468 [2024-07-15 20:51:56.717414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.468 [2024-07-15 20:51:56.717432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:22.468 [2024-07-15 20:51:56.722334] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f20d8 00:26:22.468 [2024-07-15 20:51:56.723016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.468 [2024-07-15 20:51:56.723032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:22.468 [2024-07-15 20:51:56.733230] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190ed0b0 00:26:22.468 [2024-07-15 20:51:56.734391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.468 [2024-07-15 20:51:56.734408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:22.468 [2024-07-15 20:51:56.742364] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e2c28 00:26:22.468 [2024-07-15 20:51:56.743175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.468 [2024-07-15 20:51:56.743193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.468 [2024-07-15 20:51:56.751763] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e6738 00:26:22.468 [2024-07-15 20:51:56.752823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22520 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.468 [2024-07-15 20:51:56.752840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.468 [2024-07-15 20:51:56.760940] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190efae0 00:26:22.468 [2024-07-15 20:51:56.762016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.468 [2024-07-15 20:51:56.762033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.468 [2024-07-15 20:51:56.770214] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190eaab8 00:26:22.468 [2024-07-15 20:51:56.771277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.468 [2024-07-15 20:51:56.771295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.468 [2024-07-15 20:51:56.779380] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e3498 00:26:22.468 [2024-07-15 20:51:56.780441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.468 [2024-07-15 20:51:56.780458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.468 [2024-07-15 20:51:56.788556] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e9e10 00:26:22.468 [2024-07-15 20:51:56.789614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.468 [2024-07-15 20:51:56.789631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.468 [2024-07-15 20:51:56.797702] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e8d30 00:26:22.468 [2024-07-15 20:51:56.798764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.468 [2024-07-15 20:51:56.798781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.468 [2024-07-15 20:51:56.806856] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190df550 00:26:22.468 [2024-07-15 20:51:56.807918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.468 [2024-07-15 20:51:56.807935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.468 [2024-07-15 20:51:56.816026] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e1f80 00:26:22.468 [2024-07-15 20:51:56.817086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 
nsid:1 lba:22666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.468 [2024-07-15 20:51:56.817103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.468 [2024-07-15 20:51:56.825412] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190eea00 00:26:22.468 [2024-07-15 20:51:56.826592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:11883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.468 [2024-07-15 20:51:56.826609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:22.468 [2024-07-15 20:51:56.833132] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190de8a8 00:26:22.468 [2024-07-15 20:51:56.833699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.468 [2024-07-15 20:51:56.833716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:22.468 [2024-07-15 20:51:56.842741] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e3060 00:26:22.468 [2024-07-15 20:51:56.843426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.468 [2024-07-15 20:51:56.843444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:22.468 [2024-07-15 20:51:56.850959] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e0630 00:26:22.468 [2024-07-15 20:51:56.851868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.468 [2024-07-15 20:51:56.851885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:22.468 [2024-07-15 20:51:56.860548] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e7818 00:26:22.468 [2024-07-15 20:51:56.861591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.468 [2024-07-15 20:51:56.861608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:22.468 [2024-07-15 20:51:56.870360] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190fe720 00:26:22.468 [2024-07-15 20:51:56.871497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.468 [2024-07-15 20:51:56.871514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:22.468 [2024-07-15 20:51:56.881276] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e27f0 00:26:22.468 [2024-07-15 20:51:56.882885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:14 nsid:1 lba:7376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.468 [2024-07-15 20:51:56.882902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:22.468 [2024-07-15 20:51:56.887767] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f7538 00:26:22.468 [2024-07-15 20:51:56.888450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.468 [2024-07-15 20:51:56.888467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:22.468 [2024-07-15 20:51:56.897336] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e01f8 00:26:22.468 [2024-07-15 20:51:56.898223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.469 [2024-07-15 20:51:56.898243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:22.469 [2024-07-15 20:51:56.906000] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190fcdd0 00:26:22.469 [2024-07-15 20:51:56.906915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.469 [2024-07-15 20:51:56.906932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:22.469 [2024-07-15 20:51:56.917114] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f4298 00:26:22.469 [2024-07-15 20:51:56.918484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.469 [2024-07-15 20:51:56.918503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:22.469 [2024-07-15 20:51:56.926737] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190ed920 00:26:22.469 [2024-07-15 20:51:56.928221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.469 [2024-07-15 20:51:56.928241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:22.469 [2024-07-15 20:51:56.935265] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190de038 00:26:22.469 [2024-07-15 20:51:56.936291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.469 [2024-07-15 20:51:56.936309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.469 [2024-07-15 20:51:56.944346] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e5658 00:26:22.469 [2024-07-15 20:51:56.945375] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.469 [2024-07-15 20:51:56.945392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.729 [2024-07-15 20:51:56.953499] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f2d80 00:26:22.729 [2024-07-15 20:51:56.954529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.729 [2024-07-15 20:51:56.954546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.729 [2024-07-15 20:51:56.962662] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e6300 00:26:22.729 [2024-07-15 20:51:56.963687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:3434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.729 [2024-07-15 20:51:56.963704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.729 [2024-07-15 20:51:56.971842] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190ef6a8 00:26:22.729 [2024-07-15 20:51:56.972867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.729 [2024-07-15 20:51:56.972884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.729 [2024-07-15 20:51:56.980062] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e88f8 00:26:22.729 [2024-07-15 20:51:56.981545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:17077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.729 [2024-07-15 20:51:56.981563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:22.729 [2024-07-15 20:51:56.988694] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f0ff8 00:26:22.729 [2024-07-15 20:51:56.989477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.729 [2024-07-15 20:51:56.989495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.729 [2024-07-15 20:51:56.997842] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e99d8 00:26:22.729 [2024-07-15 20:51:56.998602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.729 [2024-07-15 20:51:56.998627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.729 [2024-07-15 20:51:57.006962] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f6cc8 00:26:22.729 [2024-07-15 
20:51:57.007737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.729 [2024-07-15 20:51:57.007755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.729 [2024-07-15 20:51:57.016111] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f8e88 00:26:22.729 [2024-07-15 20:51:57.016868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.729 [2024-07-15 20:51:57.016886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.729 [2024-07-15 20:51:57.025242] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f7da8 00:26:22.729 [2024-07-15 20:51:57.026013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:17610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.729 [2024-07-15 20:51:57.026030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.729 [2024-07-15 20:51:57.034340] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190fd208 00:26:22.729 [2024-07-15 20:51:57.035093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.729 [2024-07-15 20:51:57.035110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.729 [2024-07-15 20:51:57.043497] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190ff3c8 00:26:22.729 [2024-07-15 20:51:57.044250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.729 [2024-07-15 20:51:57.044268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.729 [2024-07-15 20:51:57.052615] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190eee38 00:26:22.729 [2024-07-15 20:51:57.053373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.729 [2024-07-15 20:51:57.053391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.729 [2024-07-15 20:51:57.061631] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e38d0 00:26:22.729 [2024-07-15 20:51:57.062401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.729 [2024-07-15 20:51:57.062417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.729 [2024-07-15 20:51:57.070748] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190ebfd0 00:26:22.729 
[2024-07-15 20:51:57.071531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.729 [2024-07-15 20:51:57.071548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.729 [2024-07-15 20:51:57.079859] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190fc560 00:26:22.729 [2024-07-15 20:51:57.080638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.729 [2024-07-15 20:51:57.080656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.729 [2024-07-15 20:51:57.088970] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e0a68 00:26:22.729 [2024-07-15 20:51:57.089761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.729 [2024-07-15 20:51:57.089779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.729 [2024-07-15 20:51:57.098108] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e3060 00:26:22.729 [2024-07-15 20:51:57.098903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.729 [2024-07-15 20:51:57.098920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.729 [2024-07-15 20:51:57.107143] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f46d0 00:26:22.729 [2024-07-15 20:51:57.107917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:17865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.730 [2024-07-15 20:51:57.107935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.730 [2024-07-15 20:51:57.116201] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f57b0 00:26:22.730 [2024-07-15 20:51:57.116974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.730 [2024-07-15 20:51:57.116991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.730 [2024-07-15 20:51:57.125309] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190ee190 00:26:22.730 [2024-07-15 20:51:57.126086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.730 [2024-07-15 20:51:57.126103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.730 [2024-07-15 20:51:57.134422] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190fd640 
00:26:22.730 [2024-07-15 20:51:57.135175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:18718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.730 [2024-07-15 20:51:57.135192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.730 [2024-07-15 20:51:57.143554] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e95a0 00:26:22.730 [2024-07-15 20:51:57.144305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.730 [2024-07-15 20:51:57.144323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.730 [2024-07-15 20:51:57.152679] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e1710 00:26:22.730 [2024-07-15 20:51:57.153459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.730 [2024-07-15 20:51:57.153477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.730 [2024-07-15 20:51:57.161826] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f6890 00:26:22.730 [2024-07-15 20:51:57.162640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.730 [2024-07-15 20:51:57.162657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.730 [2024-07-15 20:51:57.171162] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f8a50 00:26:22.730 [2024-07-15 20:51:57.171931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.730 [2024-07-15 20:51:57.171948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.730 [2024-07-15 20:51:57.180330] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190ec840 00:26:22.730 [2024-07-15 20:51:57.181090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.730 [2024-07-15 20:51:57.181107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.730 [2024-07-15 20:51:57.189552] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190ed920 00:26:22.730 [2024-07-15 20:51:57.190309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.730 [2024-07-15 20:51:57.190326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.730 [2024-07-15 20:51:57.198690] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with 
pdu=0x2000190e12d8 00:26:22.730 [2024-07-15 20:51:57.199450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.730 [2024-07-15 20:51:57.199468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.730 [2024-07-15 20:51:57.207691] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e88f8 00:26:22.730 [2024-07-15 20:51:57.208470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.730 [2024-07-15 20:51:57.208487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.991 [2024-07-15 20:51:57.216922] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e4578 00:26:22.991 [2024-07-15 20:51:57.217705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.991 [2024-07-15 20:51:57.217723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.991 [2024-07-15 20:51:57.226074] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e7c50 00:26:22.991 [2024-07-15 20:51:57.226832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.991 [2024-07-15 20:51:57.226849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.991 [2024-07-15 20:51:57.235200] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190fc998 00:26:22.991 [2024-07-15 20:51:57.235970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.991 [2024-07-15 20:51:57.235990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.991 [2024-07-15 20:51:57.244402] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e2c28 00:26:22.991 [2024-07-15 20:51:57.245164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.991 [2024-07-15 20:51:57.245181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.991 [2024-07-15 20:51:57.253558] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f35f0 00:26:22.991 [2024-07-15 20:51:57.254317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:1287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.991 [2024-07-15 20:51:57.254334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.991 [2024-07-15 20:51:57.262733] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xbfd4d0) with pdu=0x2000190f4b08 00:26:22.991 [2024-07-15 20:51:57.263512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.992 [2024-07-15 20:51:57.263529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.992 [2024-07-15 20:51:57.271874] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f20d8 00:26:22.992 [2024-07-15 20:51:57.272652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.992 [2024-07-15 20:51:57.272669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.992 [2024-07-15 20:51:57.280908] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f7538 00:26:22.992 [2024-07-15 20:51:57.281708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.992 [2024-07-15 20:51:57.281725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.992 [2024-07-15 20:51:57.290093] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f0ff8 00:26:22.992 [2024-07-15 20:51:57.290875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:12866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.992 [2024-07-15 20:51:57.290891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.992 [2024-07-15 20:51:57.299185] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e99d8 00:26:22.992 [2024-07-15 20:51:57.299960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.992 [2024-07-15 20:51:57.299979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.992 [2024-07-15 20:51:57.308335] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f6cc8 00:26:22.992 [2024-07-15 20:51:57.309086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.992 [2024-07-15 20:51:57.309104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.992 [2024-07-15 20:51:57.317404] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f8e88 00:26:22.992 [2024-07-15 20:51:57.318149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.992 [2024-07-15 20:51:57.318166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.992 [2024-07-15 20:51:57.326557] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f7da8 00:26:22.992 [2024-07-15 20:51:57.327294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.992 [2024-07-15 20:51:57.327311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.992 [2024-07-15 20:51:57.335719] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190fd208 00:26:22.992 [2024-07-15 20:51:57.336479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.992 [2024-07-15 20:51:57.336496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.992 [2024-07-15 20:51:57.344868] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190ff3c8 00:26:22.992 [2024-07-15 20:51:57.345630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.992 [2024-07-15 20:51:57.345647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.992 [2024-07-15 20:51:57.354003] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190eee38 00:26:22.992 [2024-07-15 20:51:57.354746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.992 [2024-07-15 20:51:57.354763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.992 [2024-07-15 20:51:57.363176] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e38d0 00:26:22.992 [2024-07-15 20:51:57.363920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.992 [2024-07-15 20:51:57.363937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.992 [2024-07-15 20:51:57.372246] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190ebfd0 00:26:22.992 [2024-07-15 20:51:57.372983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.992 [2024-07-15 20:51:57.373000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.992 [2024-07-15 20:51:57.381504] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190fc560 00:26:22.992 [2024-07-15 20:51:57.382242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.992 [2024-07-15 20:51:57.382259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.992 [2024-07-15 20:51:57.390665] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e0a68 00:26:22.992 [2024-07-15 20:51:57.391426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.992 [2024-07-15 20:51:57.391443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.992 [2024-07-15 20:51:57.399783] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e3060 00:26:22.992 [2024-07-15 20:51:57.400554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.992 [2024-07-15 20:51:57.400572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.992 [2024-07-15 20:51:57.408937] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f46d0 00:26:22.992 [2024-07-15 20:51:57.409707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.992 [2024-07-15 20:51:57.409725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.992 [2024-07-15 20:51:57.418093] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f57b0 00:26:22.992 [2024-07-15 20:51:57.418889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.992 [2024-07-15 20:51:57.418906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.992 [2024-07-15 20:51:57.427524] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190ee190 00:26:22.992 [2024-07-15 20:51:57.428317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.992 [2024-07-15 20:51:57.428334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.992 [2024-07-15 20:51:57.436814] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190fd640 00:26:22.992 [2024-07-15 20:51:57.437505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.992 [2024-07-15 20:51:57.437522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.992 [2024-07-15 20:51:57.446008] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e95a0 00:26:22.992 [2024-07-15 20:51:57.446751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.992 [2024-07-15 20:51:57.446769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.992 
[2024-07-15 20:51:57.455115] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e1710 00:26:22.992 [2024-07-15 20:51:57.455859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:1029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.992 [2024-07-15 20:51:57.455876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.992 [2024-07-15 20:51:57.464286] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f6890 00:26:22.992 [2024-07-15 20:51:57.465027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.992 [2024-07-15 20:51:57.465045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.252 [2024-07-15 20:51:57.473456] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f8a50 00:26:23.253 [2024-07-15 20:51:57.474213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.253 [2024-07-15 20:51:57.474237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.253 [2024-07-15 20:51:57.482592] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190ec840 00:26:23.253 [2024-07-15 20:51:57.483363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:11661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.253 [2024-07-15 20:51:57.483381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.253 [2024-07-15 20:51:57.491924] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190ed920 00:26:23.253 [2024-07-15 20:51:57.492705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.253 [2024-07-15 20:51:57.492722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.253 [2024-07-15 20:51:57.501313] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e12d8 00:26:23.253 [2024-07-15 20:51:57.502103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.253 [2024-07-15 20:51:57.502121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.253 [2024-07-15 20:51:57.510540] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e88f8 00:26:23.253 [2024-07-15 20:51:57.511296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.253 [2024-07-15 20:51:57.511314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004c p:0 m:0 
dnr:0 00:26:23.253 [2024-07-15 20:51:57.519706] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e4578 00:26:23.253 [2024-07-15 20:51:57.520475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.253 [2024-07-15 20:51:57.520493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.253 [2024-07-15 20:51:57.528843] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e7c50 00:26:23.253 [2024-07-15 20:51:57.529606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.253 [2024-07-15 20:51:57.529623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.253 [2024-07-15 20:51:57.537972] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190fc998 00:26:23.253 [2024-07-15 20:51:57.538716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.253 [2024-07-15 20:51:57.538733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.253 [2024-07-15 20:51:57.547055] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e2c28 00:26:23.253 [2024-07-15 20:51:57.547817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.253 [2024-07-15 20:51:57.547835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.253 [2024-07-15 20:51:57.556172] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f35f0 00:26:23.253 [2024-07-15 20:51:57.556920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.253 [2024-07-15 20:51:57.556937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.253 [2024-07-15 20:51:57.565359] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f4b08 00:26:23.253 [2024-07-15 20:51:57.566111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.253 [2024-07-15 20:51:57.566128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.253 [2024-07-15 20:51:57.574464] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f20d8 00:26:23.253 [2024-07-15 20:51:57.575221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.253 [2024-07-15 20:51:57.575241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 
sqhd:004c p:0 m:0 dnr:0 00:26:23.253 [2024-07-15 20:51:57.583504] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f7538 00:26:23.253 [2024-07-15 20:51:57.584241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.253 [2024-07-15 20:51:57.584258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.253 [2024-07-15 20:51:57.592654] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f0ff8 00:26:23.253 [2024-07-15 20:51:57.593405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.253 [2024-07-15 20:51:57.593422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.253 [2024-07-15 20:51:57.601754] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e99d8 00:26:23.253 [2024-07-15 20:51:57.602521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.253 [2024-07-15 20:51:57.602538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.253 [2024-07-15 20:51:57.610791] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f6cc8 00:26:23.253 [2024-07-15 20:51:57.611571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.253 [2024-07-15 20:51:57.611588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.253 [2024-07-15 20:51:57.619922] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f8e88 00:26:23.253 [2024-07-15 20:51:57.620672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.253 [2024-07-15 20:51:57.620689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.253 [2024-07-15 20:51:57.629036] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f7da8 00:26:23.253 [2024-07-15 20:51:57.629803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.253 [2024-07-15 20:51:57.629821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.253 [2024-07-15 20:51:57.638151] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190fd208 00:26:23.253 [2024-07-15 20:51:57.638907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.253 [2024-07-15 20:51:57.638924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:63 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.253 [2024-07-15 20:51:57.647315] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190ff3c8 00:26:23.253 [2024-07-15 20:51:57.648054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.253 [2024-07-15 20:51:57.648072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.253 [2024-07-15 20:51:57.656460] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190eee38 00:26:23.253 [2024-07-15 20:51:57.657216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.253 [2024-07-15 20:51:57.657237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.253 [2024-07-15 20:51:57.665639] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e38d0 00:26:23.253 [2024-07-15 20:51:57.666325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.253 [2024-07-15 20:51:57.666342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.253 [2024-07-15 20:51:57.674778] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190ebfd0 00:26:23.253 [2024-07-15 20:51:57.675579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.254 [2024-07-15 20:51:57.675597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.254 [2024-07-15 20:51:57.684133] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190fc560 00:26:23.254 [2024-07-15 20:51:57.684927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.254 [2024-07-15 20:51:57.684945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.254 [2024-07-15 20:51:57.693362] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e0a68 00:26:23.254 [2024-07-15 20:51:57.694124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.254 [2024-07-15 20:51:57.694141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.254 [2024-07-15 20:51:57.702504] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e3060 00:26:23.254 [2024-07-15 20:51:57.703273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.254 [2024-07-15 20:51:57.703292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:120 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.254 [2024-07-15 20:51:57.711641] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f46d0 00:26:23.254 [2024-07-15 20:51:57.712423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.254 [2024-07-15 20:51:57.712444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.254 [2024-07-15 20:51:57.720765] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f57b0 00:26:23.254 [2024-07-15 20:51:57.721529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.254 [2024-07-15 20:51:57.721547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.254 [2024-07-15 20:51:57.729905] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190ee190 00:26:23.254 [2024-07-15 20:51:57.730653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.254 [2024-07-15 20:51:57.730671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:51:57.739208] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190fd640 00:26:23.514 [2024-07-15 20:51:57.740012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.514 [2024-07-15 20:51:57.740030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:51:57.748516] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e95a0 00:26:23.514 [2024-07-15 20:51:57.749282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.514 [2024-07-15 20:51:57.749301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:51:57.757691] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e1710 00:26:23.514 [2024-07-15 20:51:57.758460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.514 [2024-07-15 20:51:57.758478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:51:57.766884] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f6890 00:26:23.514 [2024-07-15 20:51:57.767567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.514 [2024-07-15 20:51:57.767585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:51:57.776040] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f8a50 00:26:23.514 [2024-07-15 20:51:57.776717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:10406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.514 [2024-07-15 20:51:57.776735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:51:57.785185] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190ec840 00:26:23.514 [2024-07-15 20:51:57.785996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.514 [2024-07-15 20:51:57.786014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:51:57.794561] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190ed920 00:26:23.514 [2024-07-15 20:51:57.795324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.514 [2024-07-15 20:51:57.795346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:51:57.803718] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e12d8 00:26:23.514 [2024-07-15 20:51:57.804400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.514 [2024-07-15 20:51:57.804418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:51:57.812894] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e88f8 00:26:23.514 [2024-07-15 20:51:57.813574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.514 [2024-07-15 20:51:57.813591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:51:57.822058] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e4578 00:26:23.514 [2024-07-15 20:51:57.822732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.514 [2024-07-15 20:51:57.822750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:51:57.831192] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e7c50 00:26:23.514 [2024-07-15 20:51:57.831905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.514 [2024-07-15 20:51:57.831924] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:51:57.840533] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190fc998 00:26:23.514 [2024-07-15 20:51:57.841203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.514 [2024-07-15 20:51:57.841221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:51:57.849763] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e2c28 00:26:23.514 [2024-07-15 20:51:57.850445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.515 [2024-07-15 20:51:57.850463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:51:57.858921] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f35f0 00:26:23.515 [2024-07-15 20:51:57.859601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.515 [2024-07-15 20:51:57.859619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:51:57.868283] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f4b08 00:26:23.515 [2024-07-15 20:51:57.869051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.515 [2024-07-15 20:51:57.869069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:51:57.877413] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f20d8 00:26:23.515 [2024-07-15 20:51:57.878187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.515 [2024-07-15 20:51:57.878205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:51:57.886592] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f7538 00:26:23.515 [2024-07-15 20:51:57.887369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.515 [2024-07-15 20:51:57.887387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:51:57.895786] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f0ff8 00:26:23.515 [2024-07-15 20:51:57.896534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.515 [2024-07-15 20:51:57.896551] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:51:57.904927] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e99d8 00:26:23.515 [2024-07-15 20:51:57.905690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.515 [2024-07-15 20:51:57.905708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:51:57.914115] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f6cc8 00:26:23.515 [2024-07-15 20:51:57.914862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.515 [2024-07-15 20:51:57.914879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:51:57.923287] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f8e88 00:26:23.515 [2024-07-15 20:51:57.924050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.515 [2024-07-15 20:51:57.924067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:51:57.932433] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f7da8 00:26:23.515 [2024-07-15 20:51:57.933155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.515 [2024-07-15 20:51:57.933172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:51:57.941877] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190fd208 00:26:23.515 [2024-07-15 20:51:57.942561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.515 [2024-07-15 20:51:57.942578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:51:57.951021] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190ff3c8 00:26:23.515 [2024-07-15 20:51:57.951698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:8136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.515 [2024-07-15 20:51:57.951716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:51:57.960167] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190eee38 00:26:23.515 [2024-07-15 20:51:57.960846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.515 [2024-07-15 
20:51:57.960864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:51:57.969322] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e38d0 00:26:23.515 [2024-07-15 20:51:57.970060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.515 [2024-07-15 20:51:57.970077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:51:57.978428] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190ebfd0 00:26:23.515 [2024-07-15 20:51:57.979208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.515 [2024-07-15 20:51:57.979229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:51:57.987622] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190fc560 00:26:23.515 [2024-07-15 20:51:57.988400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.515 [2024-07-15 20:51:57.988417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.775 [2024-07-15 20:51:57.996795] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e0a68 00:26:23.775 [2024-07-15 20:51:57.997524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.775 [2024-07-15 20:51:57.997541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.775 [2024-07-15 20:51:58.006019] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e3060 00:26:23.775 [2024-07-15 20:51:58.006701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:9475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.775 [2024-07-15 20:51:58.006718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.775 [2024-07-15 20:51:58.014580] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e6738 00:26:23.775 [2024-07-15 20:51:58.015236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:19966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.775 [2024-07-15 20:51:58.015255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:23.775 [2024-07-15 20:51:58.024193] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190ebb98 00:26:23.775 [2024-07-15 20:51:58.024971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:23.775 [2024-07-15 20:51:58.024990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:23.775 [2024-07-15 20:51:58.033779] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e5a90 00:26:23.775 [2024-07-15 20:51:58.034669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.775 [2024-07-15 20:51:58.034691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:23.775 [2024-07-15 20:51:58.043456] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190fe720 00:26:23.775 [2024-07-15 20:51:58.044557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.775 [2024-07-15 20:51:58.044575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:23.775 [2024-07-15 20:51:58.052808] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190eaef0 00:26:23.775 [2024-07-15 20:51:58.053958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.775 [2024-07-15 20:51:58.053976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:23.775 [2024-07-15 20:51:58.062411] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e5658 00:26:23.775 [2024-07-15 20:51:58.063503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.775 [2024-07-15 20:51:58.063521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:23.775 [2024-07-15 20:51:58.071535] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f1ca0 00:26:23.775 [2024-07-15 20:51:58.072681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.775 [2024-07-15 20:51:58.072699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.775 [2024-07-15 20:51:58.081144] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190ec840 00:26:23.775 [2024-07-15 20:51:58.082501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.775 [2024-07-15 20:51:58.082518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:23.775 [2024-07-15 20:51:58.089255] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f4b08 00:26:23.775 [2024-07-15 20:51:58.089907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3349 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:23.775 [2024-07-15 20:51:58.089925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.775 [2024-07-15 20:51:58.098550] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e5220 00:26:23.775 [2024-07-15 20:51:58.099467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.775 [2024-07-15 20:51:58.099486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:23.775 [2024-07-15 20:51:58.108157] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f4298 00:26:23.775 [2024-07-15 20:51:58.109184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.775 [2024-07-15 20:51:58.109202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:23.775 [2024-07-15 20:51:58.117796] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e49b0 00:26:23.775 [2024-07-15 20:51:58.118940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.775 [2024-07-15 20:51:58.118958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:23.775 [2024-07-15 20:51:58.125522] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190ed0b0 00:26:23.775 [2024-07-15 20:51:58.126043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.775 [2024-07-15 20:51:58.126060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:23.775 [2024-07-15 20:51:58.135115] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190eee38 00:26:23.775 [2024-07-15 20:51:58.135764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.775 [2024-07-15 20:51:58.135782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:23.775 [2024-07-15 20:51:58.144701] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e6b70 00:26:23.775 [2024-07-15 20:51:58.145476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.775 [2024-07-15 20:51:58.145494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:23.775 [2024-07-15 20:51:58.152932] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f31b8 00:26:23.775 [2024-07-15 20:51:58.153904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6077 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:23.775 [2024-07-15 20:51:58.153921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:23.775 [2024-07-15 20:51:58.163834] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f2d80 00:26:23.775 [2024-07-15 20:51:58.165284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.775 [2024-07-15 20:51:58.165301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:23.775 [2024-07-15 20:51:58.173406] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e84c0 00:26:23.775 [2024-07-15 20:51:58.174979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:18119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.775 [2024-07-15 20:51:58.174996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:23.775 [2024-07-15 20:51:58.179895] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f7da8 00:26:23.775 [2024-07-15 20:51:58.180645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.775 [2024-07-15 20:51:58.180663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:23.775 [2024-07-15 20:51:58.188656] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190fcdd0 00:26:23.775 [2024-07-15 20:51:58.189412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.775 [2024-07-15 20:51:58.189430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:23.775 [2024-07-15 20:51:58.199783] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e4140 00:26:23.775 [2024-07-15 20:51:58.200985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.775 [2024-07-15 20:51:58.201002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:23.775 [2024-07-15 20:51:58.207846] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f7da8 00:26:23.775 [2024-07-15 20:51:58.208350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.775 [2024-07-15 20:51:58.208367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:23.775 [2024-07-15 20:51:58.217145] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f6cc8 00:26:23.775 [2024-07-15 20:51:58.217983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9769 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.775 [2024-07-15 20:51:58.218000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:23.775 [2024-07-15 20:51:58.226178] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e1b48 00:26:23.775 [2024-07-15 20:51:58.226673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.775 [2024-07-15 20:51:58.226691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:23.775 [2024-07-15 20:51:58.237139] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190ef6a8 00:26:23.775 [2024-07-15 20:51:58.238581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.775 [2024-07-15 20:51:58.238598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:23.775 [2024-07-15 20:51:58.245631] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e01f8 00:26:23.775 [2024-07-15 20:51:58.246609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.776 [2024-07-15 20:51:58.246626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:23.776 [2024-07-15 20:51:58.254682] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f96f8 00:26:23.776 [2024-07-15 20:51:58.255661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.776 [2024-07-15 20:51:58.255679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:24.034 [2024-07-15 20:51:58.264078] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e3498 00:26:24.034 [2024-07-15 20:51:58.265057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.034 [2024-07-15 20:51:58.265075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:24.034 [2024-07-15 20:51:58.273246] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190ddc00 00:26:24.034 [2024-07-15 20:51:58.274219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.034 [2024-07-15 20:51:58.274244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:24.034 [2024-07-15 20:51:58.282391] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190fb480 00:26:24.034 [2024-07-15 20:51:58.283370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:108 nsid:1 lba:12216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.034 [2024-07-15 20:51:58.283387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:24.034 [2024-07-15 20:51:58.291556] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e23b8 00:26:24.034 [2024-07-15 20:51:58.292529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.034 [2024-07-15 20:51:58.292547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:24.034 [2024-07-15 20:51:58.300682] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190fcdd0 00:26:24.034 [2024-07-15 20:51:58.301662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.034 [2024-07-15 20:51:58.301680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:24.034 [2024-07-15 20:51:58.309933] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e73e0 00:26:24.034 [2024-07-15 20:51:58.310906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.034 [2024-07-15 20:51:58.310923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:24.034 [2024-07-15 20:51:58.319084] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190ecc78 00:26:24.034 [2024-07-15 20:51:58.320057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.034 [2024-07-15 20:51:58.320075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:24.034 [2024-07-15 20:51:58.328233] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e9e10 00:26:24.034 [2024-07-15 20:51:58.329199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.035 [2024-07-15 20:51:58.329217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:24.035 [2024-07-15 20:51:58.337405] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190ec840 00:26:24.035 [2024-07-15 20:51:58.338382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.035 [2024-07-15 20:51:58.338400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:24.035 [2024-07-15 20:51:58.346575] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190fb8b8 00:26:24.035 [2024-07-15 20:51:58.347551] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.035 [2024-07-15 20:51:58.347569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:26:24.035 [2024-07-15 20:51:58.355732] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190e6fa8
00:26:24.035 [2024-07-15 20:51:58.356709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.035 [2024-07-15 20:51:58.356727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:26:24.035 [2024-07-15 20:51:58.364788] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f8e88
00:26:24.035 [2024-07-15 20:51:58.365765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.035 [2024-07-15 20:51:58.365782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:26:24.035 [2024-07-15 20:51:58.373931] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd4d0) with pdu=0x2000190f7da8
00:26:24.035 [2024-07-15 20:51:58.374905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.035 [2024-07-15 20:51:58.374922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:26:24.035
00:26:24.035 Latency(us)
00:26:24.035 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:24.035 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:26:24.035 nvme0n1 : 2.00 27750.79 108.40 0.00 0.00 4606.75 1809.36 14816.83
00:26:24.035 ===================================================================================================================
00:26:24.035 Total : 27750.79 108.40 0.00 0.00 4606.75 1809.36 14816.83
00:26:24.035 0
00:26:24.035 20:51:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:24.035 20:51:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:24.035 20:51:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:24.035 | .driver_specific
00:26:24.035 | .nvme_error
00:26:24.035 | .status_code
00:26:24.035 | .command_transient_transport_error'
00:26:24.035 20:51:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:24.293 20:51:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 217 > 0 ))
00:26:24.293 20:51:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2835007
00:26:24.293 20:51:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2835007 ']'
00:26:24.293 20:51:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2835007
00:26:24.293 20:51:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:26:24.293 20:51:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:24.293 20:51:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2835007
00:26:24.293 20:51:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:26:24.293 20:51:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:26:24.293 20:51:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2835007'
00:26:24.293 killing process with pid 2835007
00:26:24.293 20:51:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2835007
00:26:24.293 Received shutdown signal, test time was about 2.000000 seconds
00:26:24.293
00:26:24.293 Latency(us)
00:26:24.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:24.293 ===================================================================================================================
00:26:24.293 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:24.293 20:51:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2835007
00:26:24.552 20:51:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:26:24.552 20:51:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:24.552 20:51:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:26:24.552 20:51:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:26:24.552 20:51:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:26:24.552 20:51:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2835619
00:26:24.552 20:51:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2835619 /var/tmp/bperf.sock
00:26:24.552 20:51:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2835619 ']'
00:26:24.552 20:51:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:26:24.552 20:51:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:24.552 20:51:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:24.552 20:51:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:24.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:24.552 20:51:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:24.552 20:51:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:24.552 [2024-07-15 20:51:58.833510] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization...
00:26:24.552 [2024-07-15 20:51:58.833558] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2835619 ]
00:26:24.552 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:24.552 Zero copy mechanism will not be used.
00:26:24.552 EAL: No free 2048 kB hugepages reported on node 1
00:26:24.552 [2024-07-15 20:51:58.887506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:24.552 [2024-07-15 20:51:58.965324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:25.490 20:51:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:25.490 20:51:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:26:25.490 20:51:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:25.490 20:51:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:25.490 20:51:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:25.490 20:51:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:25.490 20:51:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:25.490 20:51:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:25.490 20:51:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:25.490 20:51:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:25.748 nvme0n1
00:26:25.748 20:52:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:26:25.748 20:52:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:25.748 20:52:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:25.748 20:52:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:25.748 20:52:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:25.748 20:52:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:26.007 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:26.007 Zero copy mechanism will not be used.
00:26:26.007 Running I/O for 2 seconds...
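Pieced together from the xtrace lines above, the digest-error pass reduces to a short shell sequence: start bdevperf paused, enable per-status-code NVMe error counting, attach the controller with data digest enabled, arm crc32c corruption in the accel layer, run the workload, and read back the transient-transport-error count. A minimal sketch follows, assuming an SPDK checkout at $SPDK_DIR and an nvmf-tcp target already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420; the target_rpc helper and the sleep stand-in for waitforlisten are illustrative, and the trace does not show which RPC socket rpc_cmd talks to, so the target app's default socket is assumed here.

#!/usr/bin/env bash
# Minimal sketch of the traced flow, under the assumptions stated above.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
BPERF_SOCK=/var/tmp/bperf.sock

bperf_rpc() { "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" "$@"; }
target_rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }  # assumption: target's default RPC socket

# Start bdevperf paused (-z): 128 KiB random writes, queue depth 16, 2 s run.
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
    -w randwrite -o 131072 -t 2 -q 16 -z &
sleep 1  # stand-in for the test's waitforlisten polling of $BPERF_SOCK

# Count NVMe completions per status code; retry failed I/O indefinitely.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Clear any leftover injection, then attach with data digest (--ddgst) enabled
# so the target verifies the CRC32C of each incoming write payload.
target_rpc accel_error_inject_error -o crc32c -t disable
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Arm crc32c corruption (arguments exactly as traced), run the workload, then
# read back the count of COMMAND TRANSIENT TRANSPORT ERROR completions.
target_rpc accel_error_inject_error -o crc32c -t corrupt -i 32
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
bperf_rpc bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

With corruption armed, the final jq query should print a non-zero count, which is exactly what the test asserts, as in the (( 217 > 0 )) check from the previous 4096-byte pass above.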
00:26:26.007 [2024-07-15 20:52:00.316385] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.007 [2024-07-15 20:52:00.316803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.007 [2024-07-15 20:52:00.316831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.007 [2024-07-15 20:52:00.325217] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.007 [2024-07-15 20:52:00.325617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.007 [2024-07-15 20:52:00.325639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.007 [2024-07-15 20:52:00.333854] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.007 [2024-07-15 20:52:00.334254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.007 [2024-07-15 20:52:00.334275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.007 [2024-07-15 20:52:00.342347] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.007 [2024-07-15 20:52:00.342740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.007 [2024-07-15 20:52:00.342760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.007 [2024-07-15 20:52:00.350448] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.007 [2024-07-15 20:52:00.350843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.007 [2024-07-15 20:52:00.350862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.007 [2024-07-15 20:52:00.358623] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.007 [2024-07-15 20:52:00.358999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.007 [2024-07-15 20:52:00.359019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.007 [2024-07-15 20:52:00.366532] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.007 [2024-07-15 20:52:00.366905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.007 [2024-07-15 20:52:00.366929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.007 [2024-07-15 20:52:00.374968] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.007 [2024-07-15 20:52:00.375347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.007 [2024-07-15 20:52:00.375367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.007 [2024-07-15 20:52:00.383083] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.007 [2024-07-15 20:52:00.383484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.007 [2024-07-15 20:52:00.383503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.007 [2024-07-15 20:52:00.391496] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.007 [2024-07-15 20:52:00.391873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.007 [2024-07-15 20:52:00.391892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.007 [2024-07-15 20:52:00.400048] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.007 [2024-07-15 20:52:00.400455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.007 [2024-07-15 20:52:00.400473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.007 [2024-07-15 20:52:00.407953] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.007 [2024-07-15 20:52:00.408321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.007 [2024-07-15 20:52:00.408340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.007 [2024-07-15 20:52:00.416088] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.007 [2024-07-15 20:52:00.416501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.007 [2024-07-15 20:52:00.416520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.007 [2024-07-15 20:52:00.422908] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.007 [2024-07-15 20:52:00.423020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.007 [2024-07-15 20:52:00.423039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.007 [2024-07-15 20:52:00.431409] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.007 [2024-07-15 20:52:00.431804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.007 [2024-07-15 20:52:00.431823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.007 [2024-07-15 20:52:00.438021] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.007 [2024-07-15 20:52:00.438395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.007 [2024-07-15 20:52:00.438414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.007 [2024-07-15 20:52:00.446066] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.007 [2024-07-15 20:52:00.446455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.007 [2024-07-15 20:52:00.446473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.007 [2024-07-15 20:52:00.454218] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.007 [2024-07-15 20:52:00.454614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.007 [2024-07-15 20:52:00.454633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.007 [2024-07-15 20:52:00.462171] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.007 [2024-07-15 20:52:00.462566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.007 [2024-07-15 20:52:00.462585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.007 [2024-07-15 20:52:00.468988] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.007 [2024-07-15 20:52:00.469385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.007 [2024-07-15 20:52:00.469404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.007 [2024-07-15 20:52:00.475929] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.007 [2024-07-15 20:52:00.476018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.007 [2024-07-15 20:52:00.476036] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.007 [2024-07-15 20:52:00.483560] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.007 [2024-07-15 20:52:00.483948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.007 [2024-07-15 20:52:00.483966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.266 [2024-07-15 20:52:00.490461] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.266 [2024-07-15 20:52:00.490535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.266 [2024-07-15 20:52:00.490554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.266 [2024-07-15 20:52:00.497721] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.266 [2024-07-15 20:52:00.498102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.266 [2024-07-15 20:52:00.498121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.266 [2024-07-15 20:52:00.504416] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.266 [2024-07-15 20:52:00.504774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.266 [2024-07-15 20:52:00.504794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.266 [2024-07-15 20:52:00.510778] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.266 [2024-07-15 20:52:00.511146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.266 [2024-07-15 20:52:00.511165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.266 [2024-07-15 20:52:00.517149] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.266 [2024-07-15 20:52:00.517545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.266 [2024-07-15 20:52:00.517563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.266 [2024-07-15 20:52:00.523569] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.266 [2024-07-15 20:52:00.523937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.266 [2024-07-15 
20:52:00.523956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.266 [2024-07-15 20:52:00.530168] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.266 [2024-07-15 20:52:00.530528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.266 [2024-07-15 20:52:00.530546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.266 [2024-07-15 20:52:00.536754] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.266 [2024-07-15 20:52:00.537101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.266 [2024-07-15 20:52:00.537119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.266 [2024-07-15 20:52:00.542960] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.266 [2024-07-15 20:52:00.543329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.266 [2024-07-15 20:52:00.543348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.266 [2024-07-15 20:52:00.549520] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.266 [2024-07-15 20:52:00.549918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.266 [2024-07-15 20:52:00.549937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.266 [2024-07-15 20:52:00.556192] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.266 [2024-07-15 20:52:00.556558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.266 [2024-07-15 20:52:00.556580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.266 [2024-07-15 20:52:00.562965] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.266 [2024-07-15 20:52:00.563363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.266 [2024-07-15 20:52:00.563382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.266 [2024-07-15 20:52:00.569275] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.266 [2024-07-15 20:52:00.569660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:26.266 [2024-07-15 20:52:00.569679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.266 [2024-07-15 20:52:00.575863] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.266 [2024-07-15 20:52:00.576249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.266 [2024-07-15 20:52:00.576268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.266 [2024-07-15 20:52:00.582549] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.266 [2024-07-15 20:52:00.582912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.266 [2024-07-15 20:52:00.582931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.266 [2024-07-15 20:52:00.588896] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.266 [2024-07-15 20:52:00.589265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.267 [2024-07-15 20:52:00.589284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.267 [2024-07-15 20:52:00.594729] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.267 [2024-07-15 20:52:00.595094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.267 [2024-07-15 20:52:00.595113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.267 [2024-07-15 20:52:00.600896] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.267 [2024-07-15 20:52:00.601255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.267 [2024-07-15 20:52:00.601274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.267 [2024-07-15 20:52:00.606736] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.267 [2024-07-15 20:52:00.607119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.267 [2024-07-15 20:52:00.607137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.267 [2024-07-15 20:52:00.613003] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.267 [2024-07-15 20:52:00.613375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.267 [2024-07-15 20:52:00.613394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.267 [2024-07-15 20:52:00.619478] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.267 [2024-07-15 20:52:00.619847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.267 [2024-07-15 20:52:00.619866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.267 [2024-07-15 20:52:00.624810] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.267 [2024-07-15 20:52:00.625166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.267 [2024-07-15 20:52:00.625184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.267 [2024-07-15 20:52:00.629983] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.267 [2024-07-15 20:52:00.630382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.267 [2024-07-15 20:52:00.630401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.267 [2024-07-15 20:52:00.635659] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.267 [2024-07-15 20:52:00.636013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.267 [2024-07-15 20:52:00.636031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.267 [2024-07-15 20:52:00.640279] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.267 [2024-07-15 20:52:00.640639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.267 [2024-07-15 20:52:00.640658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.267 [2024-07-15 20:52:00.644997] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.267 [2024-07-15 20:52:00.645352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.267 [2024-07-15 20:52:00.645371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.267 [2024-07-15 20:52:00.649790] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.267 [2024-07-15 20:52:00.650136] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.267 [2024-07-15 20:52:00.650154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.267 [2024-07-15 20:52:00.654416] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.267 [2024-07-15 20:52:00.654774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.267 [2024-07-15 20:52:00.654793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.267 [2024-07-15 20:52:00.658996] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.267 [2024-07-15 20:52:00.659355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.267 [2024-07-15 20:52:00.659373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.267 [2024-07-15 20:52:00.663614] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.267 [2024-07-15 20:52:00.663973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.267 [2024-07-15 20:52:00.663991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.267 [2024-07-15 20:52:00.668182] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.267 [2024-07-15 20:52:00.668528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.267 [2024-07-15 20:52:00.668547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.267 [2024-07-15 20:52:00.672709] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.267 [2024-07-15 20:52:00.673073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.267 [2024-07-15 20:52:00.673091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.267 [2024-07-15 20:52:00.677416] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.267 [2024-07-15 20:52:00.677776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.267 [2024-07-15 20:52:00.677795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.267 [2024-07-15 20:52:00.682020] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.267 [2024-07-15 20:52:00.682382] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.267 [2024-07-15 20:52:00.682401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.267 [2024-07-15 20:52:00.686638] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.267 [2024-07-15 20:52:00.686988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.267 [2024-07-15 20:52:00.687005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.267 [2024-07-15 20:52:00.691519] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.267 [2024-07-15 20:52:00.691876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.267 [2024-07-15 20:52:00.691895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.267 [2024-07-15 20:52:00.696529] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.267 [2024-07-15 20:52:00.696879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.267 [2024-07-15 20:52:00.696901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.267 [2024-07-15 20:52:00.701158] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.267 [2024-07-15 20:52:00.701525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.267 [2024-07-15 20:52:00.701543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.267 [2024-07-15 20:52:00.705801] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.267 [2024-07-15 20:52:00.706151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.267 [2024-07-15 20:52:00.706170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.267 [2024-07-15 20:52:00.710389] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.267 [2024-07-15 20:52:00.710736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.267 [2024-07-15 20:52:00.710756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.267 [2024-07-15 20:52:00.715040] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 
00:26:26.267 [2024-07-15 20:52:00.715406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.267 [2024-07-15 20:52:00.715425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.267 [2024-07-15 20:52:00.719669] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.267 [2024-07-15 20:52:00.719993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.267 [2024-07-15 20:52:00.720012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.267 [2024-07-15 20:52:00.724205] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.267 [2024-07-15 20:52:00.724559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.267 [2024-07-15 20:52:00.724578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.267 [2024-07-15 20:52:00.728759] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.267 [2024-07-15 20:52:00.729097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.267 [2024-07-15 20:52:00.729117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.267 [2024-07-15 20:52:00.733298] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.268 [2024-07-15 20:52:00.733639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.268 [2024-07-15 20:52:00.733658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.268 [2024-07-15 20:52:00.737928] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.268 [2024-07-15 20:52:00.738294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.268 [2024-07-15 20:52:00.738314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.268 [2024-07-15 20:52:00.742584] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.268 [2024-07-15 20:52:00.742920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.268 [2024-07-15 20:52:00.742940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.268 [2024-07-15 20:52:00.747644] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.268 [2024-07-15 20:52:00.747991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.268 [2024-07-15 20:52:00.748010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.526 [2024-07-15 20:52:00.752300] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.526 [2024-07-15 20:52:00.752634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.526 [2024-07-15 20:52:00.752653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.526 [2024-07-15 20:52:00.756881] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.526 [2024-07-15 20:52:00.757237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.526 [2024-07-15 20:52:00.757257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.526 [2024-07-15 20:52:00.761971] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.526 [2024-07-15 20:52:00.762318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.526 [2024-07-15 20:52:00.762337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.526 [2024-07-15 20:52:00.768532] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.526 [2024-07-15 20:52:00.768960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.526 [2024-07-15 20:52:00.768979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.526 [2024-07-15 20:52:00.775753] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.526 [2024-07-15 20:52:00.776135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.526 [2024-07-15 20:52:00.776155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.526 [2024-07-15 20:52:00.782633] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.526 [2024-07-15 20:52:00.783034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.526 [2024-07-15 20:52:00.783053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.526 [2024-07-15 20:52:00.789264] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.526 [2024-07-15 20:52:00.789648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.526 [2024-07-15 20:52:00.789668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.526 [2024-07-15 20:52:00.796686] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.526 [2024-07-15 20:52:00.797103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.526 [2024-07-15 20:52:00.797121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.526 [2024-07-15 20:52:00.805189] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.526 [2024-07-15 20:52:00.805631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.526 [2024-07-15 20:52:00.805650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.526 [2024-07-15 20:52:00.813888] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.526 [2024-07-15 20:52:00.814355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.526 [2024-07-15 20:52:00.814374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.526 [2024-07-15 20:52:00.822315] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.526 [2024-07-15 20:52:00.822757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.526 [2024-07-15 20:52:00.822775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.526 [2024-07-15 20:52:00.831121] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.526 [2024-07-15 20:52:00.831527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.527 [2024-07-15 20:52:00.831545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.527 [2024-07-15 20:52:00.839803] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.527 [2024-07-15 20:52:00.840267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.527 [2024-07-15 20:52:00.840287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:26:26.527 [2024-07-15 20:52:00.848338] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.527 [2024-07-15 20:52:00.848701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.527 [2024-07-15 20:52:00.848719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.527 [2024-07-15 20:52:00.856779] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.527 [2024-07-15 20:52:00.857193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.527 [2024-07-15 20:52:00.857215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.527 [2024-07-15 20:52:00.865338] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.527 [2024-07-15 20:52:00.865778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.527 [2024-07-15 20:52:00.865796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.527 [2024-07-15 20:52:00.873636] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.527 [2024-07-15 20:52:00.874058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.527 [2024-07-15 20:52:00.874076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.527 [2024-07-15 20:52:00.882789] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.527 [2024-07-15 20:52:00.883239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.527 [2024-07-15 20:52:00.883258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.527 [2024-07-15 20:52:00.891355] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.527 [2024-07-15 20:52:00.891741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.527 [2024-07-15 20:52:00.891759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.527 [2024-07-15 20:52:00.900017] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.527 [2024-07-15 20:52:00.900492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.527 [2024-07-15 20:52:00.900510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.527 [2024-07-15 20:52:00.908368] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.527 [2024-07-15 20:52:00.908742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.527 [2024-07-15 20:52:00.908761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.527 [2024-07-15 20:52:00.916962] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.527 [2024-07-15 20:52:00.917404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.527 [2024-07-15 20:52:00.917422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.527 [2024-07-15 20:52:00.925748] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.527 [2024-07-15 20:52:00.926092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.527 [2024-07-15 20:52:00.926111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.527 [2024-07-15 20:52:00.933311] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.527 [2024-07-15 20:52:00.933696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.527 [2024-07-15 20:52:00.933715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.527 [2024-07-15 20:52:00.940894] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.527 [2024-07-15 20:52:00.941345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.527 [2024-07-15 20:52:00.941364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.527 [2024-07-15 20:52:00.948661] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.527 [2024-07-15 20:52:00.949113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.527 [2024-07-15 20:52:00.949132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.527 [2024-07-15 20:52:00.956424] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.527 [2024-07-15 20:52:00.956852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.527 [2024-07-15 20:52:00.956870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.527 [2024-07-15 20:52:00.964249] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.527 [2024-07-15 20:52:00.964661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.527 [2024-07-15 20:52:00.964680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.527 [2024-07-15 20:52:00.971808] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.527 [2024-07-15 20:52:00.972216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.527 [2024-07-15 20:52:00.972240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.527 [2024-07-15 20:52:00.979707] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.527 [2024-07-15 20:52:00.980063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.527 [2024-07-15 20:52:00.980081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.527 [2024-07-15 20:52:00.985792] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.527 [2024-07-15 20:52:00.986142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.527 [2024-07-15 20:52:00.986160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.527 [2024-07-15 20:52:00.992141] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.527 [2024-07-15 20:52:00.992539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.527 [2024-07-15 20:52:00.992558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.527 [2024-07-15 20:52:00.998738] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.527 [2024-07-15 20:52:00.999075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.527 [2024-07-15 20:52:00.999094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.527 [2024-07-15 20:52:01.004283] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.527 [2024-07-15 20:52:01.004628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.527 [2024-07-15 20:52:01.004647] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.786 [2024-07-15 20:52:01.010057] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.786 [2024-07-15 20:52:01.010390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.786 [2024-07-15 20:52:01.010409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.786 [2024-07-15 20:52:01.015526] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.786 [2024-07-15 20:52:01.015870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.786 [2024-07-15 20:52:01.015888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.786 [2024-07-15 20:52:01.020940] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.786 [2024-07-15 20:52:01.021274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.786 [2024-07-15 20:52:01.021293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.786 [2024-07-15 20:52:01.026186] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.786 [2024-07-15 20:52:01.026526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.787 [2024-07-15 20:52:01.026545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.787 [2024-07-15 20:52:01.031521] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.787 [2024-07-15 20:52:01.031845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.787 [2024-07-15 20:52:01.031864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.787 [2024-07-15 20:52:01.036929] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.787 [2024-07-15 20:52:01.037279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.787 [2024-07-15 20:52:01.037298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.787 [2024-07-15 20:52:01.041785] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.787 [2024-07-15 20:52:01.042136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.787 
[2024-07-15 20:52:01.042159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.787 [2024-07-15 20:52:01.046824] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.787 [2024-07-15 20:52:01.047161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.787 [2024-07-15 20:52:01.047180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.787 [2024-07-15 20:52:01.052274] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.787 [2024-07-15 20:52:01.052637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.787 [2024-07-15 20:52:01.052656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.787 [2024-07-15 20:52:01.058387] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.787 [2024-07-15 20:52:01.058727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.787 [2024-07-15 20:52:01.058745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.787 [2024-07-15 20:52:01.063561] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.787 [2024-07-15 20:52:01.063898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.787 [2024-07-15 20:52:01.063916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.787 [2024-07-15 20:52:01.068732] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.787 [2024-07-15 20:52:01.069070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.787 [2024-07-15 20:52:01.069089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.787 [2024-07-15 20:52:01.074131] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.787 [2024-07-15 20:52:01.074474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.787 [2024-07-15 20:52:01.074493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.787 [2024-07-15 20:52:01.079464] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.787 [2024-07-15 20:52:01.079802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:26.787 [2024-07-15 20:52:01.079820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.787 [2024-07-15 20:52:01.084411] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.787 [2024-07-15 20:52:01.084750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.787 [2024-07-15 20:52:01.084769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.787 [2024-07-15 20:52:01.089003] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.787 [2024-07-15 20:52:01.089355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.787 [2024-07-15 20:52:01.089374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.787 [2024-07-15 20:52:01.093576] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.787 [2024-07-15 20:52:01.093909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.787 [2024-07-15 20:52:01.093928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.787 [2024-07-15 20:52:01.098185] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.787 [2024-07-15 20:52:01.098536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.787 [2024-07-15 20:52:01.098555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.787 [2024-07-15 20:52:01.102779] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.787 [2024-07-15 20:52:01.103109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.787 [2024-07-15 20:52:01.103127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.787 [2024-07-15 20:52:01.107343] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.787 [2024-07-15 20:52:01.107689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.787 [2024-07-15 20:52:01.107708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.787 [2024-07-15 20:52:01.111961] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.787 [2024-07-15 20:52:01.112304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.787 [2024-07-15 20:52:01.112323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.787 [2024-07-15 20:52:01.116476] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.787 [2024-07-15 20:52:01.116818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.787 [2024-07-15 20:52:01.116836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.787 [2024-07-15 20:52:01.120996] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.787 [2024-07-15 20:52:01.121335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.787 [2024-07-15 20:52:01.121354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.787 [2024-07-15 20:52:01.125530] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.787 [2024-07-15 20:52:01.125867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.787 [2024-07-15 20:52:01.125885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.787 [2024-07-15 20:52:01.130080] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.787 [2024-07-15 20:52:01.130425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.787 [2024-07-15 20:52:01.130444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.787 [2024-07-15 20:52:01.134782] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.787 [2024-07-15 20:52:01.135115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.787 [2024-07-15 20:52:01.135134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.787 [2024-07-15 20:52:01.139966] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.787 [2024-07-15 20:52:01.140316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.787 [2024-07-15 20:52:01.140335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.787 [2024-07-15 20:52:01.144895] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.787 [2024-07-15 20:52:01.145245] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.787 [2024-07-15 20:52:01.145264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.787 [2024-07-15 20:52:01.149528] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.787 [2024-07-15 20:52:01.149874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.787 [2024-07-15 20:52:01.149893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.787 [2024-07-15 20:52:01.154013] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.787 [2024-07-15 20:52:01.154344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.787 [2024-07-15 20:52:01.154363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.787 [2024-07-15 20:52:01.158479] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.787 [2024-07-15 20:52:01.158814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.787 [2024-07-15 20:52:01.158833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.787 [2024-07-15 20:52:01.162965] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.787 [2024-07-15 20:52:01.163292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.788 [2024-07-15 20:52:01.163310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.788 [2024-07-15 20:52:01.167462] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.788 [2024-07-15 20:52:01.167795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.788 [2024-07-15 20:52:01.167817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.788 [2024-07-15 20:52:01.171897] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.788 [2024-07-15 20:52:01.172240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.788 [2024-07-15 20:52:01.172259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.788 [2024-07-15 20:52:01.176344] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.788 [2024-07-15 20:52:01.176674] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.788 [2024-07-15 20:52:01.176692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.788 [2024-07-15 20:52:01.180811] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.788 [2024-07-15 20:52:01.181130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.788 [2024-07-15 20:52:01.181149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.788 [2024-07-15 20:52:01.185233] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.788 [2024-07-15 20:52:01.185572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.788 [2024-07-15 20:52:01.185591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.788 [2024-07-15 20:52:01.189694] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.788 [2024-07-15 20:52:01.190025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.788 [2024-07-15 20:52:01.190043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.788 [2024-07-15 20:52:01.194155] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.788 [2024-07-15 20:52:01.194483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.788 [2024-07-15 20:52:01.194501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.788 [2024-07-15 20:52:01.198634] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.788 [2024-07-15 20:52:01.198975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.788 [2024-07-15 20:52:01.198994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.788 [2024-07-15 20:52:01.203107] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.788 [2024-07-15 20:52:01.203436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.788 [2024-07-15 20:52:01.203454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.788 [2024-07-15 20:52:01.207527] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.788 
[2024-07-15 20:52:01.207847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.788 [2024-07-15 20:52:01.207865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.788 [2024-07-15 20:52:01.211906] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.788 [2024-07-15 20:52:01.212218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.788 [2024-07-15 20:52:01.212244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.788 [2024-07-15 20:52:01.216263] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.788 [2024-07-15 20:52:01.216594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.788 [2024-07-15 20:52:01.216612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.788 [2024-07-15 20:52:01.220666] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.788 [2024-07-15 20:52:01.221004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.788 [2024-07-15 20:52:01.221023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.788 [2024-07-15 20:52:01.225073] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.788 [2024-07-15 20:52:01.225407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.788 [2024-07-15 20:52:01.225425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.788 [2024-07-15 20:52:01.229463] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.788 [2024-07-15 20:52:01.229794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.788 [2024-07-15 20:52:01.229812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.788 [2024-07-15 20:52:01.233900] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.788 [2024-07-15 20:52:01.234235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.788 [2024-07-15 20:52:01.234253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.788 [2024-07-15 20:52:01.238385] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) 
with pdu=0x2000190fef90 00:26:26.788 [2024-07-15 20:52:01.238696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.788 [2024-07-15 20:52:01.238714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.788 [2024-07-15 20:52:01.242857] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.788 [2024-07-15 20:52:01.243184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.788 [2024-07-15 20:52:01.243202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.788 [2024-07-15 20:52:01.247318] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.788 [2024-07-15 20:52:01.247645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.788 [2024-07-15 20:52:01.247663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.788 [2024-07-15 20:52:01.252805] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.788 [2024-07-15 20:52:01.253219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.788 [2024-07-15 20:52:01.253244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.788 [2024-07-15 20:52:01.259556] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.788 [2024-07-15 20:52:01.259984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.788 [2024-07-15 20:52:01.260003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.788 [2024-07-15 20:52:01.266917] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:26.788 [2024-07-15 20:52:01.267359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.788 [2024-07-15 20:52:01.267377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.047 [2024-07-15 20:52:01.274586] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.047 [2024-07-15 20:52:01.274999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.047 [2024-07-15 20:52:01.275016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.047 [2024-07-15 20:52:01.281927] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.047 [2024-07-15 20:52:01.282376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.047 [2024-07-15 20:52:01.282395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.047 [2024-07-15 20:52:01.289767] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.047 [2024-07-15 20:52:01.290171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.047 [2024-07-15 20:52:01.290189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.047 [2024-07-15 20:52:01.297483] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.047 [2024-07-15 20:52:01.297908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.047 [2024-07-15 20:52:01.297927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.047 [2024-07-15 20:52:01.305296] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.047 [2024-07-15 20:52:01.305707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.047 [2024-07-15 20:52:01.305729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.047 [2024-07-15 20:52:01.314417] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.047 [2024-07-15 20:52:01.315024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.047 [2024-07-15 20:52:01.315042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.047 [2024-07-15 20:52:01.325038] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.047 [2024-07-15 20:52:01.325514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.047 [2024-07-15 20:52:01.325532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.047 [2024-07-15 20:52:01.333857] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.047 [2024-07-15 20:52:01.334309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.047 [2024-07-15 20:52:01.334328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.047 [2024-07-15 20:52:01.342107] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.047 [2024-07-15 20:52:01.342599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.047 [2024-07-15 20:52:01.342618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.047 [2024-07-15 20:52:01.350099] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.047 [2024-07-15 20:52:01.350646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.048 [2024-07-15 20:52:01.350664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.048 [2024-07-15 20:52:01.357039] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.048 [2024-07-15 20:52:01.357416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.048 [2024-07-15 20:52:01.357435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.048 [2024-07-15 20:52:01.362935] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.048 [2024-07-15 20:52:01.363424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.048 [2024-07-15 20:52:01.363442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.048 [2024-07-15 20:52:01.370534] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.048 [2024-07-15 20:52:01.370889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.048 [2024-07-15 20:52:01.370907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.048 [2024-07-15 20:52:01.381485] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.048 [2024-07-15 20:52:01.381969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.048 [2024-07-15 20:52:01.381988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.048 [2024-07-15 20:52:01.388734] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.048 [2024-07-15 20:52:01.389083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.048 [2024-07-15 20:52:01.389101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
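The repeated data_crc32_calc_done errors above are the host TCP transport rejecting the data digest (DDGST) carried on each incoming data PDU: NVMe/TCP appends a CRC32C (Castagnoli) checksum to the PDU payload, and a mismatch fails the command at the transport layer instead of on the controller, which is why every WRITE here completes with a transport-level error rather than a media error. For reference, below is a minimal self-contained sketch of the CRC32C computation — SPDK's actual helper is spdk_crc32c_update() in include/spdk/crc32.h; the bitwise loop here is illustrative only, not SPDK's implementation:

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Reflected CRC-32C (Castagnoli) polynomial, the checksum NVMe/TCP
 * uses for its header and data digests (HDGST/DDGST). */
#define CRC32C_POLY_REFLECTED 0x82F63B38u

static uint32_t crc32c(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xFFFFFFFFu;              /* initial value */

    while (len--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++) {
            crc = (crc >> 1) ^ ((crc & 1) ? CRC32C_POLY_REFLECTED : 0);
        }
    }
    return crc ^ 0xFFFFFFFFu;                /* final XOR */
}

int main(void)
{
    /* Known-answer test: CRC-32C("123456789") == 0xE3069283. */
    printf("0x%08X\n", crc32c("123456789", 9));
    return 0;
}

Compiled and run, this prints 0xE3069283, the standard CRC-32C check value for "123456789"; a receiver compares the same computation over the PDU payload against the transmitted DDGST, and the error lines above are that comparison failing.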
00:26:27.048 [2024-07-15 20:52:01.394585] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.048 [2024-07-15 20:52:01.394915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.048 [2024-07-15 20:52:01.394933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.048 [2024-07-15 20:52:01.400732] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.048 [2024-07-15 20:52:01.401072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.048 [2024-07-15 20:52:01.401090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.048 [2024-07-15 20:52:01.405530] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.048 [2024-07-15 20:52:01.405853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.048 [2024-07-15 20:52:01.405871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.048 [2024-07-15 20:52:01.410561] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.048 [2024-07-15 20:52:01.410899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.048 [2024-07-15 20:52:01.410917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.048 [2024-07-15 20:52:01.416454] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.048 [2024-07-15 20:52:01.416773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.048 [2024-07-15 20:52:01.416792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.048 [2024-07-15 20:52:01.421076] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.048 [2024-07-15 20:52:01.421448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.048 [2024-07-15 20:52:01.421466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.048 [2024-07-15 20:52:01.426006] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.048 [2024-07-15 20:52:01.426337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.048 [2024-07-15 20:52:01.426358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.048 [2024-07-15 20:52:01.430820] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.048 [2024-07-15 20:52:01.431148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.048 [2024-07-15 20:52:01.431167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.048 [2024-07-15 20:52:01.435847] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.048 [2024-07-15 20:52:01.436182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.048 [2024-07-15 20:52:01.436200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.048 [2024-07-15 20:52:01.440312] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.048 [2024-07-15 20:52:01.440646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.048 [2024-07-15 20:52:01.440665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.048 [2024-07-15 20:52:01.444857] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.048 [2024-07-15 20:52:01.445196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.048 [2024-07-15 20:52:01.445214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.048 [2024-07-15 20:52:01.449263] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.048 [2024-07-15 20:52:01.449613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.048 [2024-07-15 20:52:01.449631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.048 [2024-07-15 20:52:01.453706] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.048 [2024-07-15 20:52:01.454028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.048 [2024-07-15 20:52:01.454046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.048 [2024-07-15 20:52:01.458234] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.048 [2024-07-15 20:52:01.458552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.048 [2024-07-15 20:52:01.458572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.048 [2024-07-15 20:52:01.462463] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.048 [2024-07-15 20:52:01.462775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.048 [2024-07-15 20:52:01.462794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.048 [2024-07-15 20:52:01.466738] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.048 [2024-07-15 20:52:01.467056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.048 [2024-07-15 20:52:01.467075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.048 [2024-07-15 20:52:01.471429] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.048 [2024-07-15 20:52:01.471741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.048 [2024-07-15 20:52:01.471760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.048 [2024-07-15 20:52:01.476415] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.048 [2024-07-15 20:52:01.476724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.048 [2024-07-15 20:52:01.476742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.048 [2024-07-15 20:52:01.481258] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.048 [2024-07-15 20:52:01.481612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.048 [2024-07-15 20:52:01.481630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.048 [2024-07-15 20:52:01.487051] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.048 [2024-07-15 20:52:01.487364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.048 [2024-07-15 20:52:01.487383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.048 [2024-07-15 20:52:01.493093] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.048 [2024-07-15 20:52:01.493409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.048 [2024-07-15 20:52:01.493427] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.048 [2024-07-15 20:52:01.498901] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.048 [2024-07-15 20:52:01.499204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.048 [2024-07-15 20:52:01.499229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.048 [2024-07-15 20:52:01.504600] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.048 [2024-07-15 20:52:01.504911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.048 [2024-07-15 20:52:01.504930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.048 [2024-07-15 20:52:01.509611] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.049 [2024-07-15 20:52:01.509918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.049 [2024-07-15 20:52:01.509936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.049 [2024-07-15 20:52:01.517092] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.049 [2024-07-15 20:52:01.517577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.049 [2024-07-15 20:52:01.517596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.049 [2024-07-15 20:52:01.525897] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.049 [2024-07-15 20:52:01.526276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.049 [2024-07-15 20:52:01.526295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.308 [2024-07-15 20:52:01.532583] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.308 [2024-07-15 20:52:01.532900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.308 [2024-07-15 20:52:01.532918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.308 [2024-07-15 20:52:01.541025] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.308 [2024-07-15 20:52:01.541441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.308 
[2024-07-15 20:52:01.541460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.308 [2024-07-15 20:52:01.547614] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.308 [2024-07-15 20:52:01.547923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.308 [2024-07-15 20:52:01.547941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.308 [2024-07-15 20:52:01.553520] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.308 [2024-07-15 20:52:01.553845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.308 [2024-07-15 20:52:01.553863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.308 [2024-07-15 20:52:01.559808] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.308 [2024-07-15 20:52:01.560124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.308 [2024-07-15 20:52:01.560142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.308 [2024-07-15 20:52:01.566629] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.308 [2024-07-15 20:52:01.566990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.308 [2024-07-15 20:52:01.567009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.308 [2024-07-15 20:52:01.575454] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.308 [2024-07-15 20:52:01.575822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.308 [2024-07-15 20:52:01.575844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.308 [2024-07-15 20:52:01.581968] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.308 [2024-07-15 20:52:01.582305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.308 [2024-07-15 20:52:01.582323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.308 [2024-07-15 20:52:01.587839] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.308 [2024-07-15 20:52:01.588193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:27.308 [2024-07-15 20:52:01.588212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.308 [2024-07-15 20:52:01.594118] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.308 [2024-07-15 20:52:01.594435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.308 [2024-07-15 20:52:01.594454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.308 [2024-07-15 20:52:01.600105] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.308 [2024-07-15 20:52:01.600424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.308 [2024-07-15 20:52:01.600443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.308 [2024-07-15 20:52:01.608131] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.308 [2024-07-15 20:52:01.608587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.308 [2024-07-15 20:52:01.608605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.308 [2024-07-15 20:52:01.618060] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.308 [2024-07-15 20:52:01.618385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.308 [2024-07-15 20:52:01.618404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.308 [2024-07-15 20:52:01.625628] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.308 [2024-07-15 20:52:01.625951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.308 [2024-07-15 20:52:01.625970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.308 [2024-07-15 20:52:01.630962] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.308 [2024-07-15 20:52:01.631286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.308 [2024-07-15 20:52:01.631304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.308 [2024-07-15 20:52:01.635932] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.308 [2024-07-15 20:52:01.636249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.308 [2024-07-15 20:52:01.636267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.308 [2024-07-15 20:52:01.640299] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.308 [2024-07-15 20:52:01.640619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.308 [2024-07-15 20:52:01.640637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.308 [2024-07-15 20:52:01.645408] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.308 [2024-07-15 20:52:01.645726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.308 [2024-07-15 20:52:01.645744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.308 [2024-07-15 20:52:01.650523] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.308 [2024-07-15 20:52:01.650834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.308 [2024-07-15 20:52:01.650852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.308 [2024-07-15 20:52:01.655653] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.308 [2024-07-15 20:52:01.655978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.308 [2024-07-15 20:52:01.655996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.308 [2024-07-15 20:52:01.662248] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.308 [2024-07-15 20:52:01.662561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.308 [2024-07-15 20:52:01.662580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.308 [2024-07-15 20:52:01.668272] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.308 [2024-07-15 20:52:01.668585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.308 [2024-07-15 20:52:01.668604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.308 [2024-07-15 20:52:01.672941] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.308 [2024-07-15 20:52:01.673262] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.308 [2024-07-15 20:52:01.673280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.308 [2024-07-15 20:52:01.677854] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.308 [2024-07-15 20:52:01.678169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.308 [2024-07-15 20:52:01.678187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.308 [2024-07-15 20:52:01.682828] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.308 [2024-07-15 20:52:01.683138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.308 [2024-07-15 20:52:01.683156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.308 [2024-07-15 20:52:01.687109] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.308 [2024-07-15 20:52:01.687425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.308 [2024-07-15 20:52:01.687443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.308 [2024-07-15 20:52:01.692197] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.308 [2024-07-15 20:52:01.692560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.308 [2024-07-15 20:52:01.692579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.308 [2024-07-15 20:52:01.697426] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.308 [2024-07-15 20:52:01.697736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.308 [2024-07-15 20:52:01.697754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.308 [2024-07-15 20:52:01.703083] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.308 [2024-07-15 20:52:01.703397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.308 [2024-07-15 20:52:01.703415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.308 [2024-07-15 20:52:01.707719] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.308 [2024-07-15 20:52:01.708035] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.308 [2024-07-15 20:52:01.708054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.308 [2024-07-15 20:52:01.712389] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.308 [2024-07-15 20:52:01.712695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.308 [2024-07-15 20:52:01.712714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.308 [2024-07-15 20:52:01.717679] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.308 [2024-07-15 20:52:01.718014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.308 [2024-07-15 20:52:01.718033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.308 [2024-07-15 20:52:01.727996] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.308 [2024-07-15 20:52:01.728413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.308 [2024-07-15 20:52:01.728443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.308 [2024-07-15 20:52:01.734755] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.308 [2024-07-15 20:52:01.735129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.308 [2024-07-15 20:52:01.735148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.308 [2024-07-15 20:52:01.741015] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.308 [2024-07-15 20:52:01.741321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.309 [2024-07-15 20:52:01.741340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.309 [2024-07-15 20:52:01.747165] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.309 [2024-07-15 20:52:01.747484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.309 [2024-07-15 20:52:01.747502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.309 [2024-07-15 20:52:01.753304] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.309 
[2024-07-15 20:52:01.753620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.309 [2024-07-15 20:52:01.753638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.309 [2024-07-15 20:52:01.758850] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.309 [2024-07-15 20:52:01.759151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.309 [2024-07-15 20:52:01.759169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.309 [2024-07-15 20:52:01.764193] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.309 [2024-07-15 20:52:01.764513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.309 [2024-07-15 20:52:01.764532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.309 [2024-07-15 20:52:01.768506] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.309 [2024-07-15 20:52:01.768812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.309 [2024-07-15 20:52:01.768830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.309 [2024-07-15 20:52:01.773196] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.309 [2024-07-15 20:52:01.773520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.309 [2024-07-15 20:52:01.773539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.309 [2024-07-15 20:52:01.777621] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.309 [2024-07-15 20:52:01.777932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.309 [2024-07-15 20:52:01.777950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.309 [2024-07-15 20:52:01.782557] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.309 [2024-07-15 20:52:01.782862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.309 [2024-07-15 20:52:01.782879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.309 [2024-07-15 20:52:01.787139] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) 
with pdu=0x2000190fef90 00:26:27.309 [2024-07-15 20:52:01.787458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.309 [2024-07-15 20:52:01.787477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.620 [2024-07-15 20:52:01.791559] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.620 [2024-07-15 20:52:01.791873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.620 [2024-07-15 20:52:01.791891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.620 [2024-07-15 20:52:01.796381] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.620 [2024-07-15 20:52:01.796689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.620 [2024-07-15 20:52:01.796707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.620 [2024-07-15 20:52:01.800930] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.620 [2024-07-15 20:52:01.801244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.620 [2024-07-15 20:52:01.801262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.620 [2024-07-15 20:52:01.806281] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.620 [2024-07-15 20:52:01.806597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.620 [2024-07-15 20:52:01.806615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.620 [2024-07-15 20:52:01.810560] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.620 [2024-07-15 20:52:01.810873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.620 [2024-07-15 20:52:01.810892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.620 [2024-07-15 20:52:01.814754] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.620 [2024-07-15 20:52:01.815065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.620 [2024-07-15 20:52:01.815082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.620 [2024-07-15 20:52:01.819540] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.620 [2024-07-15 20:52:01.820015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.620 [2024-07-15 20:52:01.820033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.620 [2024-07-15 20:52:01.823847] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.620 [2024-07-15 20:52:01.824118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.620 [2024-07-15 20:52:01.824137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.620 [2024-07-15 20:52:01.829595] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.620 [2024-07-15 20:52:01.830054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.620 [2024-07-15 20:52:01.830073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.620 [2024-07-15 20:52:01.839886] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.620 [2024-07-15 20:52:01.840196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.620 [2024-07-15 20:52:01.840215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.620 [2024-07-15 20:52:01.846376] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.620 [2024-07-15 20:52:01.846701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.620 [2024-07-15 20:52:01.846720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.620 [2024-07-15 20:52:01.852553] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.620 [2024-07-15 20:52:01.852924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.620 [2024-07-15 20:52:01.852942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.620 [2024-07-15 20:52:01.860012] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.620 [2024-07-15 20:52:01.860360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.620 [2024-07-15 20:52:01.860379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.620 [2024-07-15 20:52:01.866260] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.620 [2024-07-15 20:52:01.866585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.620 [2024-07-15 20:52:01.866604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.620 [2024-07-15 20:52:01.870705] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.620 [2024-07-15 20:52:01.870947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.620 [2024-07-15 20:52:01.870968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.620 [2024-07-15 20:52:01.875246] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.620 [2024-07-15 20:52:01.875521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.620 [2024-07-15 20:52:01.875540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.620 [2024-07-15 20:52:01.880055] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.620 [2024-07-15 20:52:01.880309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.620 [2024-07-15 20:52:01.880327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.620 [2024-07-15 20:52:01.885339] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.620 [2024-07-15 20:52:01.885602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.620 [2024-07-15 20:52:01.885621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.621 [2024-07-15 20:52:01.889872] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.621 [2024-07-15 20:52:01.890110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.621 [2024-07-15 20:52:01.890128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.621 [2024-07-15 20:52:01.894203] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.621 [2024-07-15 20:52:01.894466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.621 [2024-07-15 20:52:01.894484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:26:27.621 [2024-07-15 20:52:01.898853] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.621 [2024-07-15 20:52:01.899091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.621 [2024-07-15 20:52:01.899109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.621 [2024-07-15 20:52:01.903951] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.621 [2024-07-15 20:52:01.904192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.621 [2024-07-15 20:52:01.904210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.621 [2024-07-15 20:52:01.909067] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.621 [2024-07-15 20:52:01.909318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.621 [2024-07-15 20:52:01.909336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.621 [2024-07-15 20:52:01.912967] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.621 [2024-07-15 20:52:01.913220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.621 [2024-07-15 20:52:01.913244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.621 [2024-07-15 20:52:01.917172] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.621 [2024-07-15 20:52:01.917416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.621 [2024-07-15 20:52:01.917435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.621 [2024-07-15 20:52:01.921402] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.621 [2024-07-15 20:52:01.921656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.621 [2024-07-15 20:52:01.921675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.621 [2024-07-15 20:52:01.926173] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.621 [2024-07-15 20:52:01.926430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.621 [2024-07-15 20:52:01.926448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.621 [2024-07-15 20:52:01.930665] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.621 [2024-07-15 20:52:01.930907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.621 [2024-07-15 20:52:01.930925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.621 [2024-07-15 20:52:01.934434] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.621 [2024-07-15 20:52:01.934668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.621 [2024-07-15 20:52:01.934686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.621 [2024-07-15 20:52:01.938185] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.621 [2024-07-15 20:52:01.938436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.621 [2024-07-15 20:52:01.938454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.621 [2024-07-15 20:52:01.941927] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.621 [2024-07-15 20:52:01.942173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.621 [2024-07-15 20:52:01.942191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.621 [2024-07-15 20:52:01.946584] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.621 [2024-07-15 20:52:01.946948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.621 [2024-07-15 20:52:01.946966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.621 [2024-07-15 20:52:01.951622] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.621 [2024-07-15 20:52:01.951871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.621 [2024-07-15 20:52:01.951889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.621 [2024-07-15 20:52:01.956441] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.621 [2024-07-15 20:52:01.956720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.621 [2024-07-15 20:52:01.956738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.621 [2024-07-15 20:52:01.960886] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.621 [2024-07-15 20:52:01.961125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.621 [2024-07-15 20:52:01.961143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.621 [2024-07-15 20:52:01.965455] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.621 [2024-07-15 20:52:01.965733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.621 [2024-07-15 20:52:01.965751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.621 [2024-07-15 20:52:01.970105] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.621 [2024-07-15 20:52:01.970452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.621 [2024-07-15 20:52:01.970470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.621 [2024-07-15 20:52:01.975430] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.621 [2024-07-15 20:52:01.975809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.621 [2024-07-15 20:52:01.975828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.621 [2024-07-15 20:52:01.981016] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.621 [2024-07-15 20:52:01.981368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.621 [2024-07-15 20:52:01.981388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.621 [2024-07-15 20:52:01.986749] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.621 [2024-07-15 20:52:01.987100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.621 [2024-07-15 20:52:01.987119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.621 [2024-07-15 20:52:01.992529] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.621 [2024-07-15 20:52:01.992839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.621 [2024-07-15 20:52:01.992860] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.621 [2024-07-15 20:52:01.999200] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.621 [2024-07-15 20:52:01.999578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.621 [2024-07-15 20:52:01.999596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.621 [2024-07-15 20:52:02.005420] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.622 [2024-07-15 20:52:02.005718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.622 [2024-07-15 20:52:02.005736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.622 [2024-07-15 20:52:02.011319] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.622 [2024-07-15 20:52:02.011624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.622 [2024-07-15 20:52:02.011642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.622 [2024-07-15 20:52:02.016264] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.622 [2024-07-15 20:52:02.016505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.622 [2024-07-15 20:52:02.016523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.622 [2024-07-15 20:52:02.021292] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.622 [2024-07-15 20:52:02.021529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.622 [2024-07-15 20:52:02.021547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.622 [2024-07-15 20:52:02.026807] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.622 [2024-07-15 20:52:02.027107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.622 [2024-07-15 20:52:02.027126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.622 [2024-07-15 20:52:02.032266] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.622 [2024-07-15 20:52:02.032507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.622 
[2024-07-15 20:52:02.032525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.622 [2024-07-15 20:52:02.036952] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.622 [2024-07-15 20:52:02.037213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.622 [2024-07-15 20:52:02.037247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.622 [2024-07-15 20:52:02.041395] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.622 [2024-07-15 20:52:02.041646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.622 [2024-07-15 20:52:02.041664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.622 [2024-07-15 20:52:02.045696] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.622 [2024-07-15 20:52:02.045944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.622 [2024-07-15 20:52:02.045963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.622 [2024-07-15 20:52:02.050070] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.622 [2024-07-15 20:52:02.050318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.622 [2024-07-15 20:52:02.050336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.622 [2024-07-15 20:52:02.055538] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.622 [2024-07-15 20:52:02.055787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.622 [2024-07-15 20:52:02.055806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.622 [2024-07-15 20:52:02.060484] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.622 [2024-07-15 20:52:02.060774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.622 [2024-07-15 20:52:02.060793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.622 [2024-07-15 20:52:02.066724] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.622 [2024-07-15 20:52:02.067035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:27.622 [2024-07-15 20:52:02.067054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.622 [2024-07-15 20:52:02.073566] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.622 [2024-07-15 20:52:02.073843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.622 [2024-07-15 20:52:02.073861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.622 [2024-07-15 20:52:02.079065] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.622 [2024-07-15 20:52:02.079411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.622 [2024-07-15 20:52:02.079429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.622 [2024-07-15 20:52:02.083622] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.622 [2024-07-15 20:52:02.083905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.622 [2024-07-15 20:52:02.083925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.622 [2024-07-15 20:52:02.088428] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.622 [2024-07-15 20:52:02.088682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.622 [2024-07-15 20:52:02.088701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.622 [2024-07-15 20:52:02.093078] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.622 [2024-07-15 20:52:02.093382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.622 [2024-07-15 20:52:02.093402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.622 [2024-07-15 20:52:02.099074] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.622 [2024-07-15 20:52:02.099440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.622 [2024-07-15 20:52:02.099459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.881 [2024-07-15 20:52:02.105117] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.881 [2024-07-15 20:52:02.105423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.881 [2024-07-15 20:52:02.105442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.881 [2024-07-15 20:52:02.110186] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.881 [2024-07-15 20:52:02.110485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.881 [2024-07-15 20:52:02.110503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.881 [2024-07-15 20:52:02.114953] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.881 [2024-07-15 20:52:02.115305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.881 [2024-07-15 20:52:02.115323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.881 [2024-07-15 20:52:02.119696] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.881 [2024-07-15 20:52:02.119945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.881 [2024-07-15 20:52:02.119963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.881 [2024-07-15 20:52:02.123756] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.881 [2024-07-15 20:52:02.124013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.881 [2024-07-15 20:52:02.124032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.881 [2024-07-15 20:52:02.127758] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.881 [2024-07-15 20:52:02.128012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.881 [2024-07-15 20:52:02.128034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.881 [2024-07-15 20:52:02.131792] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.881 [2024-07-15 20:52:02.132042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.881 [2024-07-15 20:52:02.132060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.881 [2024-07-15 20:52:02.135768] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.881 [2024-07-15 20:52:02.136024] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.881 [2024-07-15 20:52:02.136043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.881 [2024-07-15 20:52:02.139770] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.881 [2024-07-15 20:52:02.140024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.881 [2024-07-15 20:52:02.140043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.881 [2024-07-15 20:52:02.143798] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.881 [2024-07-15 20:52:02.144052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.881 [2024-07-15 20:52:02.144071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.881 [2024-07-15 20:52:02.147791] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.881 [2024-07-15 20:52:02.148041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.881 [2024-07-15 20:52:02.148061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.881 [2024-07-15 20:52:02.151802] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.881 [2024-07-15 20:52:02.152040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.881 [2024-07-15 20:52:02.152059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.881 [2024-07-15 20:52:02.156561] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.882 [2024-07-15 20:52:02.156819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.882 [2024-07-15 20:52:02.156837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.882 [2024-07-15 20:52:02.160727] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.882 [2024-07-15 20:52:02.160974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.882 [2024-07-15 20:52:02.160993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.882 [2024-07-15 20:52:02.164711] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.882 [2024-07-15 20:52:02.164971] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.882 [2024-07-15 20:52:02.164990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.882 [2024-07-15 20:52:02.168729] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.882 [2024-07-15 20:52:02.168977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.882 [2024-07-15 20:52:02.168996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.882 [2024-07-15 20:52:02.172943] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.882 [2024-07-15 20:52:02.173197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.882 [2024-07-15 20:52:02.173216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.882 [2024-07-15 20:52:02.177809] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.882 [2024-07-15 20:52:02.178088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.882 [2024-07-15 20:52:02.178107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.882 [2024-07-15 20:52:02.182780] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.882 [2024-07-15 20:52:02.183057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.882 [2024-07-15 20:52:02.183075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.882 [2024-07-15 20:52:02.187151] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.882 [2024-07-15 20:52:02.187395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.882 [2024-07-15 20:52:02.187415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.882 [2024-07-15 20:52:02.191497] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.882 [2024-07-15 20:52:02.191753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.882 [2024-07-15 20:52:02.191772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.882 [2024-07-15 20:52:02.195701] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.882 
[2024-07-15 20:52:02.195948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.882 [2024-07-15 20:52:02.195967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.882 [2024-07-15 20:52:02.200641] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.882 [2024-07-15 20:52:02.200894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.882 [2024-07-15 20:52:02.200913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.882 [2024-07-15 20:52:02.205854] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.882 [2024-07-15 20:52:02.206107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.882 [2024-07-15 20:52:02.206126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.882 [2024-07-15 20:52:02.210746] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.882 [2024-07-15 20:52:02.210983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.882 [2024-07-15 20:52:02.211001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.882 [2024-07-15 20:52:02.215238] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.882 [2024-07-15 20:52:02.215491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.882 [2024-07-15 20:52:02.215510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.882 [2024-07-15 20:52:02.219623] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.882 [2024-07-15 20:52:02.219878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.882 [2024-07-15 20:52:02.219897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.882 [2024-07-15 20:52:02.224112] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.882 [2024-07-15 20:52:02.224357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.882 [2024-07-15 20:52:02.224376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.882 [2024-07-15 20:52:02.228621] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with 
pdu=0x2000190fef90 00:26:27.882 [2024-07-15 20:52:02.228873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.882 [2024-07-15 20:52:02.228891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.882 [2024-07-15 20:52:02.232793] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.882 [2024-07-15 20:52:02.233043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.882 [2024-07-15 20:52:02.233061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.882 [2024-07-15 20:52:02.237022] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.882 [2024-07-15 20:52:02.237275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.882 [2024-07-15 20:52:02.237294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.882 [2024-07-15 20:52:02.241580] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.882 [2024-07-15 20:52:02.241832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.882 [2024-07-15 20:52:02.241857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.882 [2024-07-15 20:52:02.247352] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.882 [2024-07-15 20:52:02.247611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.882 [2024-07-15 20:52:02.247629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.882 [2024-07-15 20:52:02.251722] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.882 [2024-07-15 20:52:02.251973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.882 [2024-07-15 20:52:02.251991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.882 [2024-07-15 20:52:02.255959] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.882 [2024-07-15 20:52:02.256234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.882 [2024-07-15 20:52:02.256253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.882 [2024-07-15 20:52:02.260337] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.882 [2024-07-15 20:52:02.260584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.882 [2024-07-15 20:52:02.260602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.882 [2024-07-15 20:52:02.264703] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.882 [2024-07-15 20:52:02.264968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.882 [2024-07-15 20:52:02.264986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.882 [2024-07-15 20:52:02.268866] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.882 [2024-07-15 20:52:02.268943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.882 [2024-07-15 20:52:02.268961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.882 [2024-07-15 20:52:02.273382] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.882 [2024-07-15 20:52:02.273633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.882 [2024-07-15 20:52:02.273652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.882 [2024-07-15 20:52:02.277543] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.882 [2024-07-15 20:52:02.277647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.882 [2024-07-15 20:52:02.277664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.882 [2024-07-15 20:52:02.281785] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.882 [2024-07-15 20:52:02.282032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.882 [2024-07-15 20:52:02.282050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.882 [2024-07-15 20:52:02.285908] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90 00:26:27.882 [2024-07-15 20:52:02.286012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.882 [2024-07-15 20:52:02.286029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.882 [2024-07-15 20:52:02.290497] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90
00:26:27.882 [2024-07-15 20:52:02.290774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.882 [2024-07-15 20:52:02.290792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:27.882 [2024-07-15 20:52:02.295104] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90
00:26:27.882 [2024-07-15 20:52:02.295345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.882 [2024-07-15 20:52:02.295364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.882 [2024-07-15 20:52:02.299430] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90
00:26:27.882 [2024-07-15 20:52:02.299671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.882 [2024-07-15 20:52:02.299689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:27.882 [2024-07-15 20:52:02.303760] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90
00:26:27.882 [2024-07-15 20:52:02.304001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.882 [2024-07-15 20:52:02.304019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:27.882 [2024-07-15 20:52:02.308300] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfd810) with pdu=0x2000190fef90
00:26:27.882 [2024-07-15 20:52:02.308560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.882 [2024-07-15 20:52:02.308579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:27.882
00:26:27.882 Latency(us)
00:26:27.882 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:27.882 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:26:27.882 nvme0n1 : 2.00 5447.71 680.96 0.00 0.00 2932.91 1795.12 11682.50
00:26:27.882 ===================================================================================================================
00:26:27.882 Total : 5447.71 680.96 0.00 0.00 2932.91 1795.12 11682.50
00:26:27.882 0
00:26:27.882 20:52:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:27.882 20:52:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:27.882 20:52:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:27.882 | .driver_specific
00:26:27.882 | .nvme_error
00:26:27.882 | .status_code
00:26:27.882 | .command_transient_transport_error'
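This readback is how the test scores itself: bdev_get_iostat exposes a driver_specific.nvme_error block for NVMe bdevs, and the jq filter above extracts the count of completions that ended in COMMAND TRANSIENT TRANSPORT ERROR (351 in this run, one per digest failure logged above). A minimal standalone sketch of the same query, using the bperf RPC socket and workspace paths shown in this trace:

    spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'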
20:52:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:28.139 20:52:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 351 > 0 ))
00:26:28.139 20:52:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2835619
00:26:28.139 20:52:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2835619 ']'
00:26:28.139 20:52:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2835619
00:26:28.139 20:52:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:26:28.139 20:52:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:28.139 20:52:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2835619
00:26:28.139 20:52:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:26:28.139 20:52:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:26:28.139 20:52:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2835619'
killing process with pid 2835619
00:26:28.139 20:52:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2835619
Received shutdown signal, test time was about 2.000000 seconds
00:26:28.139
00:26:28.139 Latency(us)
00:26:28.139 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:28.139 ===================================================================================================================
00:26:28.139 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:28.140 20:52:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2835619
00:26:28.398 20:52:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2833546
00:26:28.398 20:52:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2833546 ']'
00:26:28.398 20:52:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2833546
00:26:28.398 20:52:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:26:28.398 20:52:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:28.398 20:52:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2833546
00:26:28.398 20:52:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:26:28.398 20:52:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:26:28.398 20:52:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2833546'
killing process with pid 2833546
00:26:28.398 20:52:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2833546
00:26:28.398 20:52:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2833546
00:26:28.656
00:26:28.656 real 0m16.641s
00:26:28.656 user 0m32.025s
00:26:28.656 sys 0m4.188s
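The killprocess helper traced above (autotest_common.sh lines 948-972) has a simple shape. A rough reconstruction from the xtrace follows; the body of the sudo branch is an assumption, since this run never takes it:

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1                            # @948: a pid argument is required
        kill -0 "$pid" || return 1                           # @952: fail if the process is already gone
        if [ "$(uname)" = Linux ]; then                      # @953: Linux supports ps -o comm=
            process_name=$(ps --no-headers -o comm= "$pid")  # @954: e.g. reactor_1 for an SPDK app
        fi
        if [ "$process_name" = sudo ]; then                  # @958: kill the child, not sudo itself
            pid=$(pgrep -P "$pid")                           # assumption: this branch is not visible in the trace
        fi
        echo "killing process with pid $pid"                 # @966
        kill "$pid"                                          # @967
        wait "$pid" || true                                  # @972: reap it; the exit status is ignored
    }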
00:26:28.656 20:52:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable
00:26:28.656 20:52:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:28.656 ************************************
00:26:28.656 END TEST nvmf_digest_error
00:26:28.656 ************************************
00:26:28.656 20:52:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0
00:26:28.656 20:52:02 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:26:28.656 20:52:02 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:26:28.656 20:52:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:28.656 20:52:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:26:28.656 20:52:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:28.656 20:52:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:26:28.656 20:52:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:28.656 20:52:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:26:28.656 20:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:28.656 20:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:26:28.656 20:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:26:28.656 20:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2833546 ']'
00:26:28.656 20:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2833546
00:26:28.656 20:52:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 2833546 ']'
00:26:28.656 20:52:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 2833546
00:26:28.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2833546) - No such process
00:26:28.656 20:52:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 2833546 is not found'
Process with pid 2833546 is not found
00:26:28.656 20:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:26:28.656 20:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:26:28.656 20:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:26:28.656 20:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:26:28.656 20:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
00:26:28.656 20:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:28.656 20:52:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:26:28.656 20:52:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:31.192 20:52:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:26:31.192
00:26:31.192 real 0m40.767s
00:26:31.192 user 1m5.800s
00:26:31.192 sys 0m12.325s
00:26:31.192 20:52:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable
00:26:31.192 20:52:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:26:31.192 ************************************
00:26:31.192 END TEST nvmf_digest
00:26:31.192 ************************************
00:26:31.192 20:52:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:26:31.192 20:52:05 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]]
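The nvmftestfini teardown traced above is deliberately tolerant: it syncs, drops set -e, and is prepared to retry the kernel module unload up to 20 times, since nvme-tcp can stay busy briefly after the last queue pair closes. The loop reduces to roughly the sketch below; the back-off between retries is an assumption, as this run unloads cleanly on the first pass:

    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1   # assumed back-off; only the successful first iteration is visible in the trace
    done
    set -e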
00:26:31.192 20:52:05 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]]
00:26:31.192 20:52:05 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]]
00:26:31.192 20:52:05 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:26:31.192 20:52:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:26:31.192 20:52:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:26:31.192 20:52:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:26:31.192 ************************************
00:26:31.192 START TEST nvmf_bdevperf
00:26:31.192 ************************************
00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:26:31.192 * Looking for test storage...
00:26:31.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
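run_test, which dispatched bdevperf.sh above, is the autotest wrapper that produces the START TEST / END TEST banners and the real/user/sys timings seen throughout this log. From its xtrace (argument-count check at line 1099, banner printing, timed execution) it behaves roughly like the sketch below; the exact failure handling in autotest_common.sh is not visible here:

    run_test() {
        [ $# -le 1 ] && return 1          # @1099: needs a test name plus a command to run
        local test_name=$1; shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@"                          # assumed source of the real/user/sys lines above
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
    }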
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.192 [paths/export.sh@3, @4, and @6 repeat the same PATH with the go/protoc/golangci toolchain prefixes rotated to the front; the three near-identical dumps are elided] 00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf --
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:26:31.192 20:52:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:36.516 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:36.516 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:36.516 Found net devices under 0000:86:00.0: cvl_0_0 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
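Device discovery here is pure sysfs: for each matching PCI function the harness globs /sys/bus/pci/devices/$pci/net/ to find the kernel netdevs bound to it, which is where the "Found net devices under ..." lines come from. A small stand-alone sketch of the same lookup (PCI addresses taken from this rig):

    # List the kernel netdevs bound to each NVMe-capable NIC port via sysfs.
    for pci in 0000:86:00.0 0000:86:00.1; do
        for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$netdir" ] || continue     # glob stays literal if nothing is bound
            echo "Found net devices under $pci: ${netdir##*/}"
        done
    done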
00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:36.516 Found net devices under 0000:86:00.1: cvl_0_1 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:36.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:36.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:26:36.516 00:26:36.516 --- 10.0.0.2 ping statistics --- 00:26:36.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.516 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:36.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:36.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:26:36.516 00:26:36.516 --- 10.0.0.1 ping statistics --- 00:26:36.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.516 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2839741 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2839741 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2839741 ']' 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:36.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:36.516 20:52:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.516 [2024-07-15 20:52:10.742588] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
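The trace above shows how one physical host plays both roles: port cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and becomes the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), an iptables rule opens the NVMe/TCP port, and the target app itself is started through ip netns exec. Condensed from the commands logged above (the nvmf_tgt path is shortened here):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                       # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # target -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE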
00:26:36.516 [2024-07-15 20:52:10.742635] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:36.516 EAL: No free 2048 kB hugepages reported on node 1 00:26:36.516 [2024-07-15 20:52:10.805391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:36.516 [2024-07-15 20:52:10.887012] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:36.516 [2024-07-15 20:52:10.887049] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:36.516 [2024-07-15 20:52:10.887056] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:36.516 [2024-07-15 20:52:10.887062] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:36.516 [2024-07-15 20:52:10.887067] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:36.517 [2024-07-15 20:52:10.887170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:36.517 [2024-07-15 20:52:10.887274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:36.517 [2024-07-15 20:52:10.887275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:37.083 20:52:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:37.083 20:52:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:26:37.083 20:52:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:37.083 20:52:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:37.083 20:52:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:37.342 20:52:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:37.342 20:52:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:37.342 20:52:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.342 20:52:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:37.342 [2024-07-15 20:52:11.592128] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:37.342 20:52:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.342 20:52:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:37.342 20:52:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.342 20:52:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:37.342 Malloc0 00:26:37.342 20:52:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.342 20:52:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:37.342 20:52:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.342 20:52:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:37.342 20:52:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.342 20:52:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:37.342 20:52:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.342 20:52:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:37.342 20:52:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.342 20:52:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:37.342 20:52:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.342 20:52:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:37.342 [2024-07-15 20:52:11.660605] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:37.342 20:52:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.342 20:52:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:37.342 20:52:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:37.342 20:52:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:37.342 20:52:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:37.342 20:52:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:37.342 20:52:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:37.342 { 00:26:37.342 "params": { 00:26:37.342 "name": "Nvme$subsystem", 00:26:37.342 "trtype": "$TEST_TRANSPORT", 00:26:37.342 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:37.342 "adrfam": "ipv4", 00:26:37.342 "trsvcid": "$NVMF_PORT", 00:26:37.342 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:37.342 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:37.342 "hdgst": ${hdgst:-false}, 00:26:37.342 "ddgst": ${ddgst:-false} 00:26:37.342 }, 00:26:37.342 "method": "bdev_nvme_attach_controller" 00:26:37.342 } 00:26:37.342 EOF 00:26:37.342 )") 00:26:37.342 20:52:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:37.342 20:52:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:26:37.342 20:52:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:37.342 20:52:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:37.342 "params": { 00:26:37.342 "name": "Nvme1", 00:26:37.342 "trtype": "tcp", 00:26:37.342 "traddr": "10.0.0.2", 00:26:37.342 "adrfam": "ipv4", 00:26:37.342 "trsvcid": "4420", 00:26:37.342 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:37.342 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:37.342 "hdgst": false, 00:26:37.342 "ddgst": false 00:26:37.342 }, 00:26:37.342 "method": "bdev_nvme_attach_controller" 00:26:37.342 }' 00:26:37.342 [2024-07-15 20:52:11.711508] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
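rpc_cmd above is the harness's wrapper around the target's JSON-RPC socket (/var/tmp/spdk.sock); the five calls map one-to-one onto plain rpc.py invocations. A sketch with the exact arguments from this run, assuming the standard scripts/rpc.py frontend in place of the wrapper:

    # Configure the target over JSON-RPC: create the TCP transport, a
    # 64 MiB malloc bdev with 512-byte blocks, one subsystem exposing it
    # as a namespace, and a TCP listener on 10.0.0.2:4420.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420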
00:26:37.342 [2024-07-15 20:52:11.711550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2839876 ] 00:26:37.342 EAL: No free 2048 kB hugepages reported on node 1 00:26:37.342 [2024-07-15 20:52:11.765970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.601 [2024-07-15 20:52:11.845859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.601 Running I/O for 1 seconds... 00:26:38.979 00:26:38.979 Latency(us) 00:26:38.979 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:38.979 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:38.979 Verification LBA range: start 0x0 length 0x4000 00:26:38.979 Nvme1n1 : 1.01 10974.28 42.87 0.00 0.00 11618.41 2535.96 14075.99 00:26:38.979 =================================================================================================================== 00:26:38.979 Total : 10974.28 42.87 0.00 0.00 11618.41 2535.96 14075.99 00:26:38.979 20:52:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2840113 00:26:38.979 20:52:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:38.979 20:52:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:38.979 20:52:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:38.979 20:52:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:38.979 20:52:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:38.979 20:52:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:38.979 20:52:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:38.979 { 00:26:38.979 "params": { 00:26:38.979 "name": "Nvme$subsystem", 00:26:38.979 "trtype": "$TEST_TRANSPORT", 00:26:38.979 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.979 "adrfam": "ipv4", 00:26:38.979 "trsvcid": "$NVMF_PORT", 00:26:38.979 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.979 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.979 "hdgst": ${hdgst:-false}, 00:26:38.979 "ddgst": ${ddgst:-false} 00:26:38.979 }, 00:26:38.979 "method": "bdev_nvme_attach_controller" 00:26:38.979 } 00:26:38.979 EOF 00:26:38.979 )") 00:26:38.979 20:52:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:38.979 20:52:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:26:38.979 20:52:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:38.979 20:52:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:38.979 "params": { 00:26:38.979 "name": "Nvme1", 00:26:38.979 "trtype": "tcp", 00:26:38.979 "traddr": "10.0.0.2", 00:26:38.979 "adrfam": "ipv4", 00:26:38.979 "trsvcid": "4420", 00:26:38.979 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:38.979 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:38.979 "hdgst": false, 00:26:38.979 "ddgst": false 00:26:38.979 }, 00:26:38.979 "method": "bdev_nvme_attach_controller" 00:26:38.979 }' 00:26:38.979 [2024-07-15 20:52:13.282296] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
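Both bdevperf runs are driven purely by a generated JSON config fed through a file descriptor; the bdev_nvme_attach_controller parameters printed above are its payload. A hand-runnable equivalent of the first run follows; the outer "subsystems"/"bdev" envelope is an assumption about what gen_nvmf_target_json wraps around the printed params, everything else is copied from the trace:

    # Run the first 1-second verify workload against the target by hand.
    build/examples/bdevperf -q 128 -o 4096 -w verify -t 1 --json <(cat <<'EOF'
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }]
      }]
    }
    EOF
    )

The 15-second rerun launched above uses the same config with -t 15 and -f, and the harness then kill -9s the target mid-run, which is what produces the qpair abort flood that follows.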
00:26:38.979 [2024-07-15 20:52:13.282344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2840113 ] 00:26:38.979 EAL: No free 2048 kB hugepages reported on node 1 00:26:38.979 [2024-07-15 20:52:13.336500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.979 [2024-07-15 20:52:13.405880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.236 Running I/O for 15 seconds... 00:26:41.767 20:52:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2839741 00:26:41.767 20:52:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:42.029 [2024-07-15 20:52:16.259972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:101096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.029 [2024-07-15 20:52:16.260011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.029 [2024-07-15 20:52:16.260031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:101104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.029 [2024-07-15 20:52:16.260040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.029 [2024-07-15 20:52:16.260050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.029 [2024-07-15 20:52:16.260058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.029 [2024-07-15 20:52:16.260069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:101120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.029 [2024-07-15 20:52:16.260077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.029 [2024-07-15 20:52:16.260086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:101128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.029 [2024-07-15 20:52:16.260093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.029 [2024-07-15 20:52:16.260102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:101136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.029 [2024-07-15 20:52:16.260110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.030 [2024-07-15 20:52:16.260118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:101144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.030 [2024-07-15 20:52:16.260125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.030 [2024-07-15 20:52:16.260139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:101152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.030 [2024-07-15 20:52:16.260148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.030 [the dump continues in the same two-entry pattern for every I/O that was still outstanding when the target was killed: an nvme_io_qpair_print_command NOTICE (WRITE sqid:1 lba 101160 through 101752 and READ sqid:1 lba 100960 through 101072, len:8 each) followed by its matching spdk_nvme_print_completion NOTICE "ABORTED - SQ DELETION (00/08) qid:1"; roughly ninety near-identical entry pairs elided] 00:26:42.031 [2024-07-15 20:52:16.261599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:101080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.031 [2024-07-15 20:52:16.261606] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.032 [2024-07-15 20:52:16.261614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:101088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.032 [2024-07-15 20:52:16.261620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.032 [2024-07-15 20:52:16.261628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:101760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.032 [2024-07-15 20:52:16.261634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.032 [2024-07-15 20:52:16.261643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:101768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.032 [2024-07-15 20:52:16.261649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.032 [2024-07-15 20:52:16.261657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:101776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.032 [2024-07-15 20:52:16.261664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.032 [2024-07-15 20:52:16.261671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:101784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.032 [2024-07-15 20:52:16.261677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.032 [2024-07-15 20:52:16.261685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:101792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.032 [2024-07-15 20:52:16.261692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.032 [2024-07-15 20:52:16.261700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:101800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.032 [2024-07-15 20:52:16.261706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.032 [2024-07-15 20:52:16.261714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:101808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.032 [2024-07-15 20:52:16.261720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.032 [2024-07-15 20:52:16.261728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:101816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.032 [2024-07-15 20:52:16.261734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.032 [2024-07-15 20:52:16.261746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:101824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.032 [2024-07-15 20:52:16.261752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.032 [2024-07-15 20:52:16.261760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:101832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.032 [2024-07-15 20:52:16.261767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.032 [2024-07-15 20:52:16.261775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:101840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.032 [2024-07-15 20:52:16.261781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.032 [2024-07-15 20:52:16.261789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:101848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.032 [2024-07-15 20:52:16.261796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.032 [2024-07-15 20:52:16.261803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:101856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.032 [2024-07-15 20:52:16.261810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.032 [2024-07-15 20:52:16.261822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:101864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.032 [2024-07-15 20:52:16.261829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.032 [2024-07-15 20:52:16.261837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:101872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.032 [2024-07-15 20:52:16.261843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.032 [2024-07-15 20:52:16.261851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:101880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.032 [2024-07-15 20:52:16.261858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.032 [2024-07-15 20:52:16.261866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.032 [2024-07-15 20:52:16.261872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.032 [2024-07-15 20:52:16.261879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:101896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.032 [2024-07-15 20:52:16.261885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.032 [2024-07-15 20:52:16.261893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:101904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.032 [2024-07-15 20:52:16.261900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:42.032 [2024-07-15 20:52:16.261908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:101912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.032 [2024-07-15 20:52:16.261914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.032 [2024-07-15 20:52:16.261922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:101920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.032 [2024-07-15 20:52:16.261929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.032 [2024-07-15 20:52:16.261937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:101928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.032 [2024-07-15 20:52:16.261943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.032 [2024-07-15 20:52:16.261952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:101936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.032 [2024-07-15 20:52:16.261958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.032 [2024-07-15 20:52:16.261966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:101944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.032 [2024-07-15 20:52:16.261972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.032 [2024-07-15 20:52:16.261979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:101952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.032 [2024-07-15 20:52:16.261986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.032 [2024-07-15 20:52:16.261994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:101960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.032 [2024-07-15 20:52:16.262000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.032 [2024-07-15 20:52:16.262009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:101968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.032 [2024-07-15 20:52:16.262015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.032 [2024-07-15 20:52:16.262022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fdc70 is same with the state(5) to be set 00:26:42.032 [2024-07-15 20:52:16.262029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:42.032 [2024-07-15 20:52:16.262034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:42.032 [2024-07-15 20:52:16.262040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101976 len:8 PRP1 0x0 PRP2 0x0 00:26:42.032 [2024-07-15 20:52:16.262048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:42.032 [2024-07-15 20:52:16.262090] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14fdc70 was disconnected and freed. reset controller. 00:26:42.032 [2024-07-15 20:52:16.264919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.032 [2024-07-15 20:52:16.264970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.032 [2024-07-15 20:52:16.265556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.032 [2024-07-15 20:52:16.265571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.032 [2024-07-15 20:52:16.265578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.032 [2024-07-15 20:52:16.265756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.032 [2024-07-15 20:52:16.265933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.032 [2024-07-15 20:52:16.265941] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.032 [2024-07-15 20:52:16.265951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.033 [2024-07-15 20:52:16.268789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.033 [2024-07-15 20:52:16.278316] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.033 [2024-07-15 20:52:16.278797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.033 [2024-07-15 20:52:16.278815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.033 [2024-07-15 20:52:16.278822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.033 [2024-07-15 20:52:16.278999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.033 [2024-07-15 20:52:16.279176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.033 [2024-07-15 20:52:16.279184] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.033 [2024-07-15 20:52:16.279191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.033 [2024-07-15 20:52:16.282025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.033 [2024-07-15 20:52:16.291391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.033 [2024-07-15 20:52:16.291830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.033 [2024-07-15 20:52:16.291847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:42.033 [2024-07-15 20:52:16.291853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:42.033 [2024-07-15 20:52:16.292031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:42.033 [2024-07-15 20:52:16.292209] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.033 [2024-07-15 20:52:16.292217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.033 [2024-07-15 20:52:16.292223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.033 [2024-07-15 20:52:16.295053] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.033 [2024-07-15 20:52:16.304576] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.033 [2024-07-15 20:52:16.305027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.033 [2024-07-15 20:52:16.305043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:42.033 [2024-07-15 20:52:16.305050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:42.033 [2024-07-15 20:52:16.305235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:42.033 [2024-07-15 20:52:16.305415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.033 [2024-07-15 20:52:16.305423] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.033 [2024-07-15 20:52:16.305429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.033 [2024-07-15 20:52:16.308261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.033 [2024-07-15 20:52:16.317773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.033 [2024-07-15 20:52:16.318251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.033 [2024-07-15 20:52:16.318271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:42.033 [2024-07-15 20:52:16.318278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:42.033 [2024-07-15 20:52:16.318455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:42.033 [2024-07-15 20:52:16.318632] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.033 [2024-07-15 20:52:16.318640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.033 [2024-07-15 20:52:16.318646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.033 [2024-07-15 20:52:16.321474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.033 [2024-07-15 20:52:16.330810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.033 [2024-07-15 20:52:16.331264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.033 [2024-07-15 20:52:16.331280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:42.033 [2024-07-15 20:52:16.331287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:42.033 [2024-07-15 20:52:16.331465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:42.033 [2024-07-15 20:52:16.331646] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.033 [2024-07-15 20:52:16.331654] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.033 [2024-07-15 20:52:16.331660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.033 [2024-07-15 20:52:16.334494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.033 [2024-07-15 20:52:16.344005] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.033 [2024-07-15 20:52:16.344458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.033 [2024-07-15 20:52:16.344474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:42.033 [2024-07-15 20:52:16.344481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:42.033 [2024-07-15 20:52:16.344657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:42.033 [2024-07-15 20:52:16.344835] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.033 [2024-07-15 20:52:16.344842] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.033 [2024-07-15 20:52:16.344849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.033 [2024-07-15 20:52:16.347678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.033 [2024-07-15 20:52:16.357181] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.033 [2024-07-15 20:52:16.357669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.033 [2024-07-15 20:52:16.357686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:42.033 [2024-07-15 20:52:16.357693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:42.033 [2024-07-15 20:52:16.357869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:42.033 [2024-07-15 20:52:16.358049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.033 [2024-07-15 20:52:16.358056] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.033 [2024-07-15 20:52:16.358063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.033 [2024-07-15 20:52:16.360893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.033 [2024-07-15 20:52:16.370287] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.033 [2024-07-15 20:52:16.370764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.033 [2024-07-15 20:52:16.370781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:42.033 [2024-07-15 20:52:16.370789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:42.033 [2024-07-15 20:52:16.370966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:42.033 [2024-07-15 20:52:16.371142] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.033 [2024-07-15 20:52:16.371151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.033 [2024-07-15 20:52:16.371160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.033 [2024-07-15 20:52:16.374024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.033 [2024-07-15 20:52:16.383407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.033 [2024-07-15 20:52:16.383881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.033 [2024-07-15 20:52:16.383897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:42.033 [2024-07-15 20:52:16.383904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:42.033 [2024-07-15 20:52:16.384080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:42.033 [2024-07-15 20:52:16.384263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.033 [2024-07-15 20:52:16.384270] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.033 [2024-07-15 20:52:16.384277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.033 [2024-07-15 20:52:16.387104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.033 [2024-07-15 20:52:16.396513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.033 [2024-07-15 20:52:16.396996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.033 [2024-07-15 20:52:16.397012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:42.033 [2024-07-15 20:52:16.397019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:42.033 [2024-07-15 20:52:16.397195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:42.033 [2024-07-15 20:52:16.397377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.033 [2024-07-15 20:52:16.397385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.033 [2024-07-15 20:52:16.397392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.033 [2024-07-15 20:52:16.400217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.033 [2024-07-15 20:52:16.409562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.034 [2024-07-15 20:52:16.410011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.034 [2024-07-15 20:52:16.410027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:42.034 [2024-07-15 20:52:16.410034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:42.034 [2024-07-15 20:52:16.410211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:42.034 [2024-07-15 20:52:16.410394] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.034 [2024-07-15 20:52:16.410401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.034 [2024-07-15 20:52:16.410408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.034 [2024-07-15 20:52:16.413238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.034 [2024-07-15 20:52:16.422754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.034 [2024-07-15 20:52:16.423179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.034 [2024-07-15 20:52:16.423194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:42.034 [2024-07-15 20:52:16.423201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:42.034 [2024-07-15 20:52:16.423383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:42.034 [2024-07-15 20:52:16.423561] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.034 [2024-07-15 20:52:16.423568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.034 [2024-07-15 20:52:16.423575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.034 [2024-07-15 20:52:16.426405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.034 [2024-07-15 20:52:16.435928] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.034 [2024-07-15 20:52:16.436332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.034 [2024-07-15 20:52:16.436349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:42.034 [2024-07-15 20:52:16.436356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:42.034 [2024-07-15 20:52:16.436532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:42.034 [2024-07-15 20:52:16.436708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.034 [2024-07-15 20:52:16.436716] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.034 [2024-07-15 20:52:16.436722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.034 [2024-07-15 20:52:16.439551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.034 [2024-07-15 20:52:16.449069] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.034 [2024-07-15 20:52:16.449498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.034 [2024-07-15 20:52:16.449515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:42.034 [2024-07-15 20:52:16.449525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:42.034 [2024-07-15 20:52:16.449702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:42.034 [2024-07-15 20:52:16.449878] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.034 [2024-07-15 20:52:16.449886] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.034 [2024-07-15 20:52:16.449892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.034 [2024-07-15 20:52:16.452720] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.034 [2024-07-15 20:52:16.462120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.034 [2024-07-15 20:52:16.462514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.034 [2024-07-15 20:52:16.462530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:42.034 [2024-07-15 20:52:16.462537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:42.034 [2024-07-15 20:52:16.462708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:42.034 [2024-07-15 20:52:16.462879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.034 [2024-07-15 20:52:16.462886] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.034 [2024-07-15 20:52:16.462892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.034 [2024-07-15 20:52:16.465642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.034 [2024-07-15 20:52:16.475098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.034 [2024-07-15 20:52:16.475605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.034 [2024-07-15 20:52:16.475646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:42.034 [2024-07-15 20:52:16.475667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:42.034 [2024-07-15 20:52:16.476258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:42.034 [2024-07-15 20:52:16.476456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.034 [2024-07-15 20:52:16.476463] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.034 [2024-07-15 20:52:16.476469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.034 [2024-07-15 20:52:16.479206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.034 [2024-07-15 20:52:16.488064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.034 [2024-07-15 20:52:16.488418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.034 [2024-07-15 20:52:16.488470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:42.034 [2024-07-15 20:52:16.488492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:42.034 [2024-07-15 20:52:16.489069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:42.034 [2024-07-15 20:52:16.489565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.034 [2024-07-15 20:52:16.489577] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.034 [2024-07-15 20:52:16.489583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.034 [2024-07-15 20:52:16.492286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.034 [2024-07-15 20:52:16.500995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.034 [2024-07-15 20:52:16.501431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.034 [2024-07-15 20:52:16.501447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:42.034 [2024-07-15 20:52:16.501454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:42.034 [2024-07-15 20:52:16.501625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:42.034 [2024-07-15 20:52:16.501797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.034 [2024-07-15 20:52:16.501804] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.034 [2024-07-15 20:52:16.501811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.034 [2024-07-15 20:52:16.504517] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.295 [2024-07-15 20:52:16.513988] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.295 [2024-07-15 20:52:16.514408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.295 [2024-07-15 20:52:16.514424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:42.295 [2024-07-15 20:52:16.514431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:42.295 [2024-07-15 20:52:16.514594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:42.295 [2024-07-15 20:52:16.514756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.295 [2024-07-15 20:52:16.514763] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.295 [2024-07-15 20:52:16.514769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.295 [2024-07-15 20:52:16.517467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.295 [2024-07-15 20:52:16.527001] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.295 [2024-07-15 20:52:16.527489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.295 [2024-07-15 20:52:16.527542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:42.295 [2024-07-15 20:52:16.527564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:42.295 [2024-07-15 20:52:16.528142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:42.295 [2024-07-15 20:52:16.528619] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.295 [2024-07-15 20:52:16.528627] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.296 [2024-07-15 20:52:16.528633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.296 [2024-07-15 20:52:16.531340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.296 [2024-07-15 20:52:16.539955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.296 [2024-07-15 20:52:16.540429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.296 [2024-07-15 20:52:16.540472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:42.296 [2024-07-15 20:52:16.540494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:42.296 [2024-07-15 20:52:16.541072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:42.296 [2024-07-15 20:52:16.541387] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.296 [2024-07-15 20:52:16.541395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.296 [2024-07-15 20:52:16.541401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.296 [2024-07-15 20:52:16.544149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.296 [2024-07-15 20:52:16.552906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.296 [2024-07-15 20:52:16.553399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.296 [2024-07-15 20:52:16.553441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:42.296 [2024-07-15 20:52:16.553463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:42.296 [2024-07-15 20:52:16.554041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:42.296 [2024-07-15 20:52:16.554324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.296 [2024-07-15 20:52:16.554336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.296 [2024-07-15 20:52:16.554345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.296 [2024-07-15 20:52:16.558404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.296 [2024-07-15 20:52:16.566436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.296 [2024-07-15 20:52:16.566889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.296 [2024-07-15 20:52:16.566905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:42.296 [2024-07-15 20:52:16.566912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:42.296 [2024-07-15 20:52:16.567088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:42.296 [2024-07-15 20:52:16.567269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.296 [2024-07-15 20:52:16.567278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.296 [2024-07-15 20:52:16.567284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.296 [2024-07-15 20:52:16.570104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.296 [2024-07-15 20:52:16.579611] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.296 [2024-07-15 20:52:16.580069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.296 [2024-07-15 20:52:16.580084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:42.296 [2024-07-15 20:52:16.580094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:42.296 [2024-07-15 20:52:16.580275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:42.296 [2024-07-15 20:52:16.580452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.296 [2024-07-15 20:52:16.580460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.296 [2024-07-15 20:52:16.580466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.296 [2024-07-15 20:52:16.583294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.296 [2024-07-15 20:52:16.592758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.296 [2024-07-15 20:52:16.593235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.296 [2024-07-15 20:52:16.593251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:42.296 [2024-07-15 20:52:16.593259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:42.296 [2024-07-15 20:52:16.593440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:42.296 [2024-07-15 20:52:16.593628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.296 [2024-07-15 20:52:16.593636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.296 [2024-07-15 20:52:16.593642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.296 [2024-07-15 20:52:16.596473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.296 [2024-07-15 20:52:16.605881] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.296 [2024-07-15 20:52:16.606340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.296 [2024-07-15 20:52:16.606356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:42.296 [2024-07-15 20:52:16.606363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:42.296 [2024-07-15 20:52:16.606539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:42.296 [2024-07-15 20:52:16.606716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.296 [2024-07-15 20:52:16.606724] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.296 [2024-07-15 20:52:16.606730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.296 [2024-07-15 20:52:16.609561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.296 [2024-07-15 20:52:16.618923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.296 [2024-07-15 20:52:16.619297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.296 [2024-07-15 20:52:16.619314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:42.296 [2024-07-15 20:52:16.619320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:42.296 [2024-07-15 20:52:16.619497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:42.296 [2024-07-15 20:52:16.619673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.296 [2024-07-15 20:52:16.619684] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.296 [2024-07-15 20:52:16.619690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.296 [2024-07-15 20:52:16.622519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.296 [2024-07-15 20:52:16.631991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.296 [2024-07-15 20:52:16.632457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.296 [2024-07-15 20:52:16.632473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:42.296 [2024-07-15 20:52:16.632480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:42.296 [2024-07-15 20:52:16.632656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:42.296 [2024-07-15 20:52:16.632832] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.296 [2024-07-15 20:52:16.632840] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.296 [2024-07-15 20:52:16.632846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.296 [2024-07-15 20:52:16.635649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.296 [2024-07-15 20:52:16.644999] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.296 [2024-07-15 20:52:16.645428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.296 [2024-07-15 20:52:16.645444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:42.296 [2024-07-15 20:52:16.645451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:42.296 [2024-07-15 20:52:16.645623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:42.296 [2024-07-15 20:52:16.645794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.296 [2024-07-15 20:52:16.645802] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.296 [2024-07-15 20:52:16.645807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.296 [2024-07-15 20:52:16.648556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.296 [2024-07-15 20:52:16.657913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.296 [2024-07-15 20:52:16.658308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.296 [2024-07-15 20:52:16.658323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:42.296 [2024-07-15 20:52:16.658330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:42.296 [2024-07-15 20:52:16.658505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:42.296 [2024-07-15 20:52:16.658667] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.296 [2024-07-15 20:52:16.658674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.296 [2024-07-15 20:52:16.658680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.296 [2024-07-15 20:52:16.661369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.296 [2024-07-15 20:52:16.670731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:42.296 [2024-07-15 20:52:16.671213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.296 [2024-07-15 20:52:16.671267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:42.296 [2024-07-15 20:52:16.671289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:42.296 [2024-07-15 20:52:16.671785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:42.296 [2024-07-15 20:52:16.671955] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:42.296 [2024-07-15 20:52:16.671962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:42.297 [2024-07-15 20:52:16.671969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:42.297 [2024-07-15 20:52:16.674658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:42.297 [2024-07-15 20:52:16.683560] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.297 [2024-07-15 20:52:16.684031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.297 [2024-07-15 20:52:16.684072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.297 [2024-07-15 20:52:16.684093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.297 [2024-07-15 20:52:16.684661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.297 [2024-07-15 20:52:16.684833] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.297 [2024-07-15 20:52:16.684840] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.297 [2024-07-15 20:52:16.684847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.297 [2024-07-15 20:52:16.687520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.297 [2024-07-15 20:52:16.696485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.297 [2024-07-15 20:52:16.696916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.297 [2024-07-15 20:52:16.696931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.297 [2024-07-15 20:52:16.696937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.297 [2024-07-15 20:52:16.697099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.297 [2024-07-15 20:52:16.697284] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.297 [2024-07-15 20:52:16.697291] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.297 [2024-07-15 20:52:16.697298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.297 [2024-07-15 20:52:16.699962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.297 [2024-07-15 20:52:16.709367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.297 [2024-07-15 20:52:16.709747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.297 [2024-07-15 20:52:16.709762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.297 [2024-07-15 20:52:16.709769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.297 [2024-07-15 20:52:16.709944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.297 [2024-07-15 20:52:16.710116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.297 [2024-07-15 20:52:16.710123] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.297 [2024-07-15 20:52:16.710129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.297 [2024-07-15 20:52:16.712811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.297 [2024-07-15 20:52:16.722176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.297 [2024-07-15 20:52:16.722652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.297 [2024-07-15 20:52:16.722694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.297 [2024-07-15 20:52:16.722716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.297 [2024-07-15 20:52:16.723158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.297 [2024-07-15 20:52:16.723335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.297 [2024-07-15 20:52:16.723343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.297 [2024-07-15 20:52:16.723349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.297 [2024-07-15 20:52:16.726017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.297 [2024-07-15 20:52:16.735034] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.297 [2024-07-15 20:52:16.735505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.297 [2024-07-15 20:52:16.735546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.297 [2024-07-15 20:52:16.735568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.297 [2024-07-15 20:52:16.736107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.297 [2024-07-15 20:52:16.736299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.297 [2024-07-15 20:52:16.736308] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.297 [2024-07-15 20:52:16.736314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.297 [2024-07-15 20:52:16.740257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.297 [2024-07-15 20:52:16.748583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.297 [2024-07-15 20:52:16.748991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.297 [2024-07-15 20:52:16.749032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.297 [2024-07-15 20:52:16.749053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.297 [2024-07-15 20:52:16.749646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.297 [2024-07-15 20:52:16.750138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.297 [2024-07-15 20:52:16.750146] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.297 [2024-07-15 20:52:16.750155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.297 [2024-07-15 20:52:16.752882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.297 [2024-07-15 20:52:16.761403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.297 [2024-07-15 20:52:16.761850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.297 [2024-07-15 20:52:16.761891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.297 [2024-07-15 20:52:16.761913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.297 [2024-07-15 20:52:16.762353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.297 [2024-07-15 20:52:16.762525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.297 [2024-07-15 20:52:16.762533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.297 [2024-07-15 20:52:16.762539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.297 [2024-07-15 20:52:16.765205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.297 [2024-07-15 20:52:16.774261] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.297 [2024-07-15 20:52:16.774747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.297 [2024-07-15 20:52:16.774764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.297 [2024-07-15 20:52:16.774772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.297 [2024-07-15 20:52:16.774943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.297 [2024-07-15 20:52:16.775115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.297 [2024-07-15 20:52:16.775123] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.297 [2024-07-15 20:52:16.775129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.559 [2024-07-15 20:52:16.777829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.559 [2024-07-15 20:52:16.787077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.559 [2024-07-15 20:52:16.787561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.559 [2024-07-15 20:52:16.787604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.559 [2024-07-15 20:52:16.787626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.559 [2024-07-15 20:52:16.788146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.559 [2024-07-15 20:52:16.788324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.559 [2024-07-15 20:52:16.788332] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.559 [2024-07-15 20:52:16.788339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.559 [2024-07-15 20:52:16.791001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.559 [2024-07-15 20:52:16.799904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.559 [2024-07-15 20:52:16.800410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.559 [2024-07-15 20:52:16.800466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.559 [2024-07-15 20:52:16.800474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.559 [2024-07-15 20:52:16.800652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.559 [2024-07-15 20:52:16.800828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.559 [2024-07-15 20:52:16.800836] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.559 [2024-07-15 20:52:16.800842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.559 [2024-07-15 20:52:16.803546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.559 [2024-07-15 20:52:16.812715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.559 [2024-07-15 20:52:16.813145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.559 [2024-07-15 20:52:16.813160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.559 [2024-07-15 20:52:16.813166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.559 [2024-07-15 20:52:16.813355] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.559 [2024-07-15 20:52:16.813528] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.559 [2024-07-15 20:52:16.813535] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.559 [2024-07-15 20:52:16.813541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.559 [2024-07-15 20:52:16.816191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.559 [2024-07-15 20:52:16.825550] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.559 [2024-07-15 20:52:16.825959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.559 [2024-07-15 20:52:16.825974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.559 [2024-07-15 20:52:16.825980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.559 [2024-07-15 20:52:16.826142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.559 [2024-07-15 20:52:16.826328] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.559 [2024-07-15 20:52:16.826336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.559 [2024-07-15 20:52:16.826342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.559 [2024-07-15 20:52:16.829007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.559 [2024-07-15 20:52:16.838338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.559 [2024-07-15 20:52:16.838804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.559 [2024-07-15 20:52:16.838846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.559 [2024-07-15 20:52:16.838867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.559 [2024-07-15 20:52:16.839311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.559 [2024-07-15 20:52:16.839486] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.559 [2024-07-15 20:52:16.839493] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.559 [2024-07-15 20:52:16.839499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.559 [2024-07-15 20:52:16.842165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.559 [2024-07-15 20:52:16.851243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.559 [2024-07-15 20:52:16.851711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.559 [2024-07-15 20:52:16.851752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.559 [2024-07-15 20:52:16.851774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.559 [2024-07-15 20:52:16.852327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.559 [2024-07-15 20:52:16.852499] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.559 [2024-07-15 20:52:16.852507] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.559 [2024-07-15 20:52:16.852513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.559 [2024-07-15 20:52:16.855179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.559 [2024-07-15 20:52:16.864113] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.559 [2024-07-15 20:52:16.864582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.559 [2024-07-15 20:52:16.864601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.559 [2024-07-15 20:52:16.864608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.559 [2024-07-15 20:52:16.864787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.559 [2024-07-15 20:52:16.864964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.559 [2024-07-15 20:52:16.864971] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.559 [2024-07-15 20:52:16.864978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.559 [2024-07-15 20:52:16.867813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.559 [2024-07-15 20:52:16.877082] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.559 [2024-07-15 20:52:16.877543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.559 [2024-07-15 20:52:16.877577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.559 [2024-07-15 20:52:16.877600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.559 [2024-07-15 20:52:16.878179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.559 [2024-07-15 20:52:16.878386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.559 [2024-07-15 20:52:16.878394] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.559 [2024-07-15 20:52:16.878401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.559 [2024-07-15 20:52:16.881152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.559 [2024-07-15 20:52:16.890116] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.559 [2024-07-15 20:52:16.890592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.559 [2024-07-15 20:52:16.890634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.559 [2024-07-15 20:52:16.890656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.559 [2024-07-15 20:52:16.891249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.559 [2024-07-15 20:52:16.891576] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.559 [2024-07-15 20:52:16.891584] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.559 [2024-07-15 20:52:16.891590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.559 [2024-07-15 20:52:16.894332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.559 [2024-07-15 20:52:16.903006] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.559 [2024-07-15 20:52:16.903474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.559 [2024-07-15 20:52:16.903490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.559 [2024-07-15 20:52:16.903497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.559 [2024-07-15 20:52:16.903669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.559 [2024-07-15 20:52:16.903840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.559 [2024-07-15 20:52:16.903847] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.559 [2024-07-15 20:52:16.903854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.559 [2024-07-15 20:52:16.906544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.559 [2024-07-15 20:52:16.915799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.559 [2024-07-15 20:52:16.916261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.559 [2024-07-15 20:52:16.916302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.559 [2024-07-15 20:52:16.916324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.559 [2024-07-15 20:52:16.916902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.559 [2024-07-15 20:52:16.917241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.560 [2024-07-15 20:52:16.917249] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.560 [2024-07-15 20:52:16.917255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.560 [2024-07-15 20:52:16.919920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.560 [2024-07-15 20:52:16.928609] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.560 [2024-07-15 20:52:16.929052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.560 [2024-07-15 20:52:16.929093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.560 [2024-07-15 20:52:16.929122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.560 [2024-07-15 20:52:16.929579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.560 [2024-07-15 20:52:16.929751] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.560 [2024-07-15 20:52:16.929758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.560 [2024-07-15 20:52:16.929764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.560 [2024-07-15 20:52:16.932431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.560 [2024-07-15 20:52:16.941437] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.560 [2024-07-15 20:52:16.941894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.560 [2024-07-15 20:52:16.941910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.560 [2024-07-15 20:52:16.941916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.560 [2024-07-15 20:52:16.942088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.560 [2024-07-15 20:52:16.942265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.560 [2024-07-15 20:52:16.942273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.560 [2024-07-15 20:52:16.942280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.560 [2024-07-15 20:52:16.945036] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.560 [2024-07-15 20:52:16.954357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.560 [2024-07-15 20:52:16.954828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.560 [2024-07-15 20:52:16.954870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.560 [2024-07-15 20:52:16.954891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.560 [2024-07-15 20:52:16.955323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.560 [2024-07-15 20:52:16.955496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.560 [2024-07-15 20:52:16.955503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.560 [2024-07-15 20:52:16.955510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.560 [2024-07-15 20:52:16.958229] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.560 [2024-07-15 20:52:16.967173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.560 [2024-07-15 20:52:16.967620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.560 [2024-07-15 20:52:16.967635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.560 [2024-07-15 20:52:16.967641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.560 [2024-07-15 20:52:16.967803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.560 [2024-07-15 20:52:16.967965] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.560 [2024-07-15 20:52:16.967976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.560 [2024-07-15 20:52:16.967981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.560 [2024-07-15 20:52:16.970653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.560 [2024-07-15 20:52:16.979975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.560 [2024-07-15 20:52:16.980385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.560 [2024-07-15 20:52:16.980401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.560 [2024-07-15 20:52:16.980407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.560 [2024-07-15 20:52:16.980570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.560 [2024-07-15 20:52:16.980732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.560 [2024-07-15 20:52:16.980739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.560 [2024-07-15 20:52:16.980745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.560 [2024-07-15 20:52:16.983400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.560 [2024-07-15 20:52:16.992887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.560 [2024-07-15 20:52:16.993318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.560 [2024-07-15 20:52:16.993354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.560 [2024-07-15 20:52:16.993377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.560 [2024-07-15 20:52:16.993956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.560 [2024-07-15 20:52:16.994548] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.560 [2024-07-15 20:52:16.994572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.560 [2024-07-15 20:52:16.994592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.560 [2024-07-15 20:52:16.997314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.560 [2024-07-15 20:52:17.005767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.560 [2024-07-15 20:52:17.006218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.560 [2024-07-15 20:52:17.006271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.560 [2024-07-15 20:52:17.006293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.560 [2024-07-15 20:52:17.006856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.560 [2024-07-15 20:52:17.007019] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.560 [2024-07-15 20:52:17.007026] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.560 [2024-07-15 20:52:17.007031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.560 [2024-07-15 20:52:17.009698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.560 [2024-07-15 20:52:17.018647] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.560 [2024-07-15 20:52:17.019101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.560 [2024-07-15 20:52:17.019143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.560 [2024-07-15 20:52:17.019164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.560 [2024-07-15 20:52:17.019716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.560 [2024-07-15 20:52:17.019888] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.560 [2024-07-15 20:52:17.019896] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.560 [2024-07-15 20:52:17.019902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.560 [2024-07-15 20:52:17.022570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.560 [2024-07-15 20:52:17.031568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.560 [2024-07-15 20:52:17.032026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.560 [2024-07-15 20:52:17.032041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.560 [2024-07-15 20:52:17.032048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.560 [2024-07-15 20:52:17.032219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.560 [2024-07-15 20:52:17.032396] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.560 [2024-07-15 20:52:17.032404] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.560 [2024-07-15 20:52:17.032410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.560 [2024-07-15 20:52:17.035085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.822 [2024-07-15 20:52:17.044478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.822 [2024-07-15 20:52:17.044951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.822 [2024-07-15 20:52:17.044968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.822 [2024-07-15 20:52:17.044976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.822 [2024-07-15 20:52:17.045151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.822 [2024-07-15 20:52:17.045335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.822 [2024-07-15 20:52:17.045345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.822 [2024-07-15 20:52:17.045351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.822 [2024-07-15 20:52:17.048024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.822 [2024-07-15 20:52:17.057337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.822 [2024-07-15 20:52:17.057801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.822 [2024-07-15 20:52:17.057817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.822 [2024-07-15 20:52:17.057824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.822 [2024-07-15 20:52:17.057999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.822 [2024-07-15 20:52:17.058170] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.822 [2024-07-15 20:52:17.058178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.822 [2024-07-15 20:52:17.058184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.822 [2024-07-15 20:52:17.060904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.822 [2024-07-15 20:52:17.070274] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.822 [2024-07-15 20:52:17.070730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.822 [2024-07-15 20:52:17.070746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.822 [2024-07-15 20:52:17.070753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.822 [2024-07-15 20:52:17.070925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.822 [2024-07-15 20:52:17.071096] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.822 [2024-07-15 20:52:17.071104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.822 [2024-07-15 20:52:17.071110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.822 [2024-07-15 20:52:17.073788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.822 [2024-07-15 20:52:17.083248] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.822 [2024-07-15 20:52:17.083719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.822 [2024-07-15 20:52:17.083761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.822 [2024-07-15 20:52:17.083782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.822 [2024-07-15 20:52:17.084373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.822 [2024-07-15 20:52:17.084959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.822 [2024-07-15 20:52:17.084967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.822 [2024-07-15 20:52:17.084973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.822 [2024-07-15 20:52:17.087647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.822 [2024-07-15 20:52:17.096079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.822 [2024-07-15 20:52:17.096523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.822 [2024-07-15 20:52:17.096565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.822 [2024-07-15 20:52:17.096587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.822 [2024-07-15 20:52:17.096974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.822 [2024-07-15 20:52:17.097135] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.822 [2024-07-15 20:52:17.097142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.822 [2024-07-15 20:52:17.097151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.822 [2024-07-15 20:52:17.099827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.822 [2024-07-15 20:52:17.108996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.822 [2024-07-15 20:52:17.109450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.822 [2024-07-15 20:52:17.109492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.822 [2024-07-15 20:52:17.109513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.822 [2024-07-15 20:52:17.110017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.822 [2024-07-15 20:52:17.110189] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.822 [2024-07-15 20:52:17.110196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.822 [2024-07-15 20:52:17.110202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.822 [2024-07-15 20:52:17.112914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.822 [2024-07-15 20:52:17.121821] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.822 [2024-07-15 20:52:17.122276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.822 [2024-07-15 20:52:17.122293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.822 [2024-07-15 20:52:17.122300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.822 [2024-07-15 20:52:17.122476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.822 [2024-07-15 20:52:17.122652] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.822 [2024-07-15 20:52:17.122660] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.822 [2024-07-15 20:52:17.122666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.822 [2024-07-15 20:52:17.125498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.822 [2024-07-15 20:52:17.134936] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.822 [2024-07-15 20:52:17.135329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.822 [2024-07-15 20:52:17.135346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.822 [2024-07-15 20:52:17.135353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.822 [2024-07-15 20:52:17.135535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.822 [2024-07-15 20:52:17.135707] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.822 [2024-07-15 20:52:17.135714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.822 [2024-07-15 20:52:17.135720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.822 [2024-07-15 20:52:17.138458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.822 [2024-07-15 20:52:17.147888] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.822 [2024-07-15 20:52:17.148349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.822 [2024-07-15 20:52:17.148364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.822 [2024-07-15 20:52:17.148371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.822 [2024-07-15 20:52:17.148541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.822 [2024-07-15 20:52:17.148712] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.822 [2024-07-15 20:52:17.148719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.822 [2024-07-15 20:52:17.148725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.822 [2024-07-15 20:52:17.151514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.822 [2024-07-15 20:52:17.160795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.822 [2024-07-15 20:52:17.161276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.822 [2024-07-15 20:52:17.161318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.822 [2024-07-15 20:52:17.161340] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.822 [2024-07-15 20:52:17.161754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.822 [2024-07-15 20:52:17.162007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.822 [2024-07-15 20:52:17.162018] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.822 [2024-07-15 20:52:17.162026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.822 [2024-07-15 20:52:17.166082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.823 [2024-07-15 20:52:17.174096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.823 [2024-07-15 20:52:17.174549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.823 [2024-07-15 20:52:17.174592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.823 [2024-07-15 20:52:17.174613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.823 [2024-07-15 20:52:17.175082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.823 [2024-07-15 20:52:17.175270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.823 [2024-07-15 20:52:17.175279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.823 [2024-07-15 20:52:17.175285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.823 [2024-07-15 20:52:17.177990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.823 [2024-07-15 20:52:17.187012] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.823 [2024-07-15 20:52:17.187468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.823 [2024-07-15 20:52:17.187483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.823 [2024-07-15 20:52:17.187490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.823 [2024-07-15 20:52:17.187664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.823 [2024-07-15 20:52:17.187836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.823 [2024-07-15 20:52:17.187843] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.823 [2024-07-15 20:52:17.187849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.823 [2024-07-15 20:52:17.190541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.823 [2024-07-15 20:52:17.199843] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.823 [2024-07-15 20:52:17.200298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.823 [2024-07-15 20:52:17.200335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.823 [2024-07-15 20:52:17.200358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.823 [2024-07-15 20:52:17.200895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.823 [2024-07-15 20:52:17.201066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.823 [2024-07-15 20:52:17.201073] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.823 [2024-07-15 20:52:17.201079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.823 [2024-07-15 20:52:17.203752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.823 [2024-07-15 20:52:17.212746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.823 [2024-07-15 20:52:17.213178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.823 [2024-07-15 20:52:17.213193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:42.823 [2024-07-15 20:52:17.213200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:42.823 [2024-07-15 20:52:17.213390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:42.823 [2024-07-15 20:52:17.213562] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.823 [2024-07-15 20:52:17.213569] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.823 [2024-07-15 20:52:17.213575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.823 [2024-07-15 20:52:17.216241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[... 50 further identical reset cycles omitted (console time 00:26:42.823-00:26:43.609): every ~13 ms through 2024-07-15 20:52:17.866 the controller reset retries, connect() to 10.0.0.2 port 4420 fails with errno = 111, the qpair flush reports a bad file descriptor, spdk_nvme_ctrlr_reconnect_poll_async reports reinitialization failed, and bdev_nvme logs "Resetting controller failed." ...]
00:26:43.610 [2024-07-15 20:52:17.875968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.610 [2024-07-15 20:52:17.876427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 20:52:17.876443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:43.610 [2024-07-15 20:52:17.876450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:43.610 [2024-07-15 20:52:17.876626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:43.610 [2024-07-15 20:52:17.876803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.610 [2024-07-15 20:52:17.876810] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.610 [2024-07-15 20:52:17.876817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.610 [2024-07-15 20:52:17.879650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.610 [2024-07-15 20:52:17.889046] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.610 [2024-07-15 20:52:17.889451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 20:52:17.889467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:43.610 [2024-07-15 20:52:17.889477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:43.610 [2024-07-15 20:52:17.889654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:43.610 [2024-07-15 20:52:17.889831] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.610 [2024-07-15 20:52:17.889838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.610 [2024-07-15 20:52:17.889845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.610 [2024-07-15 20:52:17.892673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.610 [2024-07-15 20:52:17.902174] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.610 [2024-07-15 20:52:17.902653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 20:52:17.902668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:43.610 [2024-07-15 20:52:17.902676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:43.610 [2024-07-15 20:52:17.902852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:43.610 [2024-07-15 20:52:17.903029] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.610 [2024-07-15 20:52:17.903036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.610 [2024-07-15 20:52:17.903043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.610 [2024-07-15 20:52:17.905873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.610 [2024-07-15 20:52:17.915207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.610 [2024-07-15 20:52:17.915665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 20:52:17.915681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:43.610 [2024-07-15 20:52:17.915688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:43.610 [2024-07-15 20:52:17.915864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:43.610 [2024-07-15 20:52:17.916041] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.610 [2024-07-15 20:52:17.916048] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.610 [2024-07-15 20:52:17.916054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.610 [2024-07-15 20:52:17.918881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.610 [2024-07-15 20:52:17.928399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.610 [2024-07-15 20:52:17.928874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 20:52:17.928890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:43.610 [2024-07-15 20:52:17.928897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:43.610 [2024-07-15 20:52:17.929072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:43.610 [2024-07-15 20:52:17.929254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.610 [2024-07-15 20:52:17.929265] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.610 [2024-07-15 20:52:17.929271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.610 [2024-07-15 20:52:17.932095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.610 [2024-07-15 20:52:17.941438] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.610 [2024-07-15 20:52:17.941887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 20:52:17.941903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:43.610 [2024-07-15 20:52:17.941910] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:43.610 [2024-07-15 20:52:17.942086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:43.610 [2024-07-15 20:52:17.942270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.610 [2024-07-15 20:52:17.942278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.610 [2024-07-15 20:52:17.942284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.610 [2024-07-15 20:52:17.945109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
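Each failed connect is followed by "Failed to flush tqpair=0x12cc980 (9): Bad file descriptor". Errno 9 is EBADF: by the time the qpair's completion path tries to flush, the socket behind it has already been torn down, so the descriptor is no longer valid. A self-contained C sketch (again an illustration, not the SPDK flush path) showing how I/O on a closed descriptor yields exactly this errno:

    /* Illustration only. After close(), any write() on the old
     * descriptor fails with errno 9 (EBADF, "Bad file descriptor"),
     * the same code the flush attempts in this log report. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;

        close(fd);                     /* socket torn down after the failed connect */

        char byte = 0;
        if (write(fd, &byte, 1) < 0)   /* a later flush attempt on the dead fd */
            printf("flush failed (%d): %s\n", errno, strerror(errno));

        return 0;
    }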
00:26:43.610 [2024-07-15 20:52:17.954627] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.610 [2024-07-15 20:52:17.955081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 20:52:17.955096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:43.610 [2024-07-15 20:52:17.955103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:43.610 [2024-07-15 20:52:17.955286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:43.610 [2024-07-15 20:52:17.955462] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.610 [2024-07-15 20:52:17.955469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.610 [2024-07-15 20:52:17.955476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.610 [2024-07-15 20:52:17.958307] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.610 [2024-07-15 20:52:17.967815] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.610 [2024-07-15 20:52:17.968288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 20:52:17.968305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:43.610 [2024-07-15 20:52:17.968312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:43.610 [2024-07-15 20:52:17.968488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:43.610 [2024-07-15 20:52:17.968664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.610 [2024-07-15 20:52:17.968671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.610 [2024-07-15 20:52:17.968678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.610 [2024-07-15 20:52:17.971510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.610 [2024-07-15 20:52:17.980870] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.610 [2024-07-15 20:52:17.981238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 20:52:17.981253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:43.610 [2024-07-15 20:52:17.981260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:43.610 [2024-07-15 20:52:17.981437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:43.610 [2024-07-15 20:52:17.981614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.610 [2024-07-15 20:52:17.981621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.610 [2024-07-15 20:52:17.981627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.610 [2024-07-15 20:52:17.984454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.610 [2024-07-15 20:52:17.993962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.610 [2024-07-15 20:52:17.994366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 20:52:17.994383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:43.610 [2024-07-15 20:52:17.994390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:43.610 [2024-07-15 20:52:17.994566] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:43.610 [2024-07-15 20:52:17.994742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.610 [2024-07-15 20:52:17.994750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.610 [2024-07-15 20:52:17.994756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.610 [2024-07-15 20:52:17.997584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.610 [2024-07-15 20:52:18.007087] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.610 [2024-07-15 20:52:18.007456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.610 [2024-07-15 20:52:18.007471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:43.610 [2024-07-15 20:52:18.007478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:43.611 [2024-07-15 20:52:18.007654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:43.611 [2024-07-15 20:52:18.007831] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.611 [2024-07-15 20:52:18.007838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.611 [2024-07-15 20:52:18.007845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.611 [2024-07-15 20:52:18.010681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.611 [2024-07-15 20:52:18.020210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.611 [2024-07-15 20:52:18.020699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.611 [2024-07-15 20:52:18.020741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:43.611 [2024-07-15 20:52:18.020763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:43.611 [2024-07-15 20:52:18.021266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:43.611 [2024-07-15 20:52:18.021521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.611 [2024-07-15 20:52:18.021531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.611 [2024-07-15 20:52:18.021540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.611 [2024-07-15 20:52:18.025597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.611 [2024-07-15 20:52:18.033621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.611 [2024-07-15 20:52:18.034073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.611 [2024-07-15 20:52:18.034088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:43.611 [2024-07-15 20:52:18.034095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:43.611 [2024-07-15 20:52:18.034276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:43.611 [2024-07-15 20:52:18.034453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.611 [2024-07-15 20:52:18.034460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.611 [2024-07-15 20:52:18.034467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.611 [2024-07-15 20:52:18.037271] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.611 [2024-07-15 20:52:18.046692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.611 [2024-07-15 20:52:18.047119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.611 [2024-07-15 20:52:18.047161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:43.611 [2024-07-15 20:52:18.047182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:43.611 [2024-07-15 20:52:18.047775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:43.611 [2024-07-15 20:52:18.048303] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.611 [2024-07-15 20:52:18.048311] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.611 [2024-07-15 20:52:18.048317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.611 [2024-07-15 20:52:18.050982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.611 [2024-07-15 20:52:18.059544] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.611 [2024-07-15 20:52:18.060023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.611 [2024-07-15 20:52:18.060064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:43.611 [2024-07-15 20:52:18.060085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:43.611 [2024-07-15 20:52:18.060677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:43.611 [2024-07-15 20:52:18.061265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.611 [2024-07-15 20:52:18.061290] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.611 [2024-07-15 20:52:18.061314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.611 [2024-07-15 20:52:18.063972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.611 [2024-07-15 20:52:18.072346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.611 [2024-07-15 20:52:18.072814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.611 [2024-07-15 20:52:18.072829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:43.611 [2024-07-15 20:52:18.072835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:43.611 [2024-07-15 20:52:18.072997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:43.611 [2024-07-15 20:52:18.073159] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.611 [2024-07-15 20:52:18.073165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.611 [2024-07-15 20:52:18.073171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.611 [2024-07-15 20:52:18.075858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.611 [2024-07-15 20:52:18.085257] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.611 [2024-07-15 20:52:18.085694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.611 [2024-07-15 20:52:18.085709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:43.611 [2024-07-15 20:52:18.085715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:43.611 [2024-07-15 20:52:18.085877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:43.611 [2024-07-15 20:52:18.086056] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.611 [2024-07-15 20:52:18.086064] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.611 [2024-07-15 20:52:18.086070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.611 [2024-07-15 20:52:18.088804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.870 [2024-07-15 20:52:18.098283] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.870 [2024-07-15 20:52:18.098763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.870 [2024-07-15 20:52:18.098779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:43.870 [2024-07-15 20:52:18.098785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:43.870 [2024-07-15 20:52:18.098957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:43.870 [2024-07-15 20:52:18.099128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.870 [2024-07-15 20:52:18.099135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.870 [2024-07-15 20:52:18.099142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.870 [2024-07-15 20:52:18.101819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.870 [2024-07-15 20:52:18.111200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.870 [2024-07-15 20:52:18.111670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.870 [2024-07-15 20:52:18.111712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:43.870 [2024-07-15 20:52:18.111734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:43.870 [2024-07-15 20:52:18.112283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:43.871 [2024-07-15 20:52:18.112460] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.871 [2024-07-15 20:52:18.112468] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.871 [2024-07-15 20:52:18.112474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.871 [2024-07-15 20:52:18.115250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.871 [2024-07-15 20:52:18.124130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.871 [2024-07-15 20:52:18.124617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 20:52:18.124658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:43.871 [2024-07-15 20:52:18.124679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:43.871 [2024-07-15 20:52:18.125272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:43.871 [2024-07-15 20:52:18.125629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.871 [2024-07-15 20:52:18.125636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.871 [2024-07-15 20:52:18.125643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.871 [2024-07-15 20:52:18.128308] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
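The retry cadence is steady: the two consecutive "resetting controller" notices in the entry above are stamped 20:52:18.111200 and 20:52:18.124130, i.e. 12.930 ms apart, and the earliest such pair in this stretch (20:52:17.770938 and 20:52:17.783977) is 13.039 ms apart. So the bdev layer is re-attempting the controller reset roughly every 13 ms for as long as the target port stays closed.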
00:26:43.871 [2024-07-15 20:52:18.136999] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.871 [2024-07-15 20:52:18.137506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 20:52:18.137522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:43.871 [2024-07-15 20:52:18.137530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:43.871 [2024-07-15 20:52:18.137701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:43.871 [2024-07-15 20:52:18.137873] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.871 [2024-07-15 20:52:18.137880] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.871 [2024-07-15 20:52:18.137886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.871 [2024-07-15 20:52:18.140716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.871 [2024-07-15 20:52:18.150024] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.871 [2024-07-15 20:52:18.150520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 20:52:18.150535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:43.871 [2024-07-15 20:52:18.150542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:43.871 [2024-07-15 20:52:18.150719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:43.871 [2024-07-15 20:52:18.150898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.871 [2024-07-15 20:52:18.150906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.871 [2024-07-15 20:52:18.150912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.871 [2024-07-15 20:52:18.153683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.871 [2024-07-15 20:52:18.163115] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.871 [2024-07-15 20:52:18.163622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 20:52:18.163665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:43.871 [2024-07-15 20:52:18.163686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:43.871 [2024-07-15 20:52:18.164171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:43.871 [2024-07-15 20:52:18.164366] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.871 [2024-07-15 20:52:18.164374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.871 [2024-07-15 20:52:18.164381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.871 [2024-07-15 20:52:18.167163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.871 [2024-07-15 20:52:18.175933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.871 [2024-07-15 20:52:18.176417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 20:52:18.176432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:43.871 [2024-07-15 20:52:18.176439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:43.871 [2024-07-15 20:52:18.176611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:43.871 [2024-07-15 20:52:18.176782] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.871 [2024-07-15 20:52:18.176789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.871 [2024-07-15 20:52:18.176795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.871 [2024-07-15 20:52:18.179492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.871 [2024-07-15 20:52:18.188844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.871 [2024-07-15 20:52:18.189314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 20:52:18.189355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:43.871 [2024-07-15 20:52:18.189377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:43.871 [2024-07-15 20:52:18.189954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:43.871 [2024-07-15 20:52:18.190552] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.871 [2024-07-15 20:52:18.190582] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.871 [2024-07-15 20:52:18.190588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.871 [2024-07-15 20:52:18.193256] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.871 [2024-07-15 20:52:18.201736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.871 [2024-07-15 20:52:18.202215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 20:52:18.202266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:43.871 [2024-07-15 20:52:18.202288] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:43.871 [2024-07-15 20:52:18.202795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:43.871 [2024-07-15 20:52:18.202967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.871 [2024-07-15 20:52:18.202974] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.871 [2024-07-15 20:52:18.202980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.871 [2024-07-15 20:52:18.205609] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.871 [2024-07-15 20:52:18.214582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.871 [2024-07-15 20:52:18.215028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 20:52:18.215069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:43.871 [2024-07-15 20:52:18.215091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:43.871 [2024-07-15 20:52:18.215629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:43.871 [2024-07-15 20:52:18.215801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.871 [2024-07-15 20:52:18.215808] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.871 [2024-07-15 20:52:18.215814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.871 [2024-07-15 20:52:18.218479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.871 [2024-07-15 20:52:18.227473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.871 [2024-07-15 20:52:18.227927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 20:52:18.227941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:43.871 [2024-07-15 20:52:18.227947] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:43.871 [2024-07-15 20:52:18.228108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:43.871 [2024-07-15 20:52:18.228292] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.871 [2024-07-15 20:52:18.228301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.871 [2024-07-15 20:52:18.228307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.871 [2024-07-15 20:52:18.230971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.871 [2024-07-15 20:52:18.240291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.871 [2024-07-15 20:52:18.240703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 20:52:18.240742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:43.871 [2024-07-15 20:52:18.240772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:43.871 [2024-07-15 20:52:18.241363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:43.871 [2024-07-15 20:52:18.241896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.871 [2024-07-15 20:52:18.241904] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.871 [2024-07-15 20:52:18.241910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.871 [2024-07-15 20:52:18.244549] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.871 [2024-07-15 20:52:18.253165] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.871 [2024-07-15 20:52:18.253624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 20:52:18.253639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:43.871 [2024-07-15 20:52:18.253646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:43.872 [2024-07-15 20:52:18.253817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:43.872 [2024-07-15 20:52:18.253989] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.872 [2024-07-15 20:52:18.253996] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.872 [2024-07-15 20:52:18.254002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.872 [2024-07-15 20:52:18.256685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.872 [2024-07-15 20:52:18.266111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.872 [2024-07-15 20:52:18.266585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 20:52:18.266627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:43.872 [2024-07-15 20:52:18.266649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:43.872 [2024-07-15 20:52:18.267240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:43.872 [2024-07-15 20:52:18.267790] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.872 [2024-07-15 20:52:18.267797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.872 [2024-07-15 20:52:18.267804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.872 [2024-07-15 20:52:18.270544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.872 [2024-07-15 20:52:18.278969] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.872 [2024-07-15 20:52:18.279429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 20:52:18.279445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:43.872 [2024-07-15 20:52:18.279452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:43.872 [2024-07-15 20:52:18.279624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:43.872 [2024-07-15 20:52:18.279798] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.872 [2024-07-15 20:52:18.279805] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.872 [2024-07-15 20:52:18.279811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.872 [2024-07-15 20:52:18.282504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.872 [2024-07-15 20:52:18.291874] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.872 [2024-07-15 20:52:18.292344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 20:52:18.292385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:43.872 [2024-07-15 20:52:18.292406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:43.872 [2024-07-15 20:52:18.292631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:43.872 [2024-07-15 20:52:18.292794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.872 [2024-07-15 20:52:18.292800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.872 [2024-07-15 20:52:18.292806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.872 [2024-07-15 20:52:18.295495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.872 [2024-07-15 20:52:18.304788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.872 [2024-07-15 20:52:18.305253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 20:52:18.305270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:43.872 [2024-07-15 20:52:18.305277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:43.872 [2024-07-15 20:52:18.305453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:43.872 [2024-07-15 20:52:18.305630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.872 [2024-07-15 20:52:18.305637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.872 [2024-07-15 20:52:18.305644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.872 [2024-07-15 20:52:18.308485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.872 [2024-07-15 20:52:18.317689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.872 [2024-07-15 20:52:18.318148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 20:52:18.318190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:43.872 [2024-07-15 20:52:18.318213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:43.872 [2024-07-15 20:52:18.318805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:43.872 [2024-07-15 20:52:18.319166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.872 [2024-07-15 20:52:18.319173] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.872 [2024-07-15 20:52:18.319180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.872 [2024-07-15 20:52:18.321895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.872 [2024-07-15 20:52:18.330571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.872 [2024-07-15 20:52:18.331046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 20:52:18.331089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:43.872 [2024-07-15 20:52:18.331113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:43.872 [2024-07-15 20:52:18.331644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:43.872 [2024-07-15 20:52:18.331818] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.872 [2024-07-15 20:52:18.331828] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.872 [2024-07-15 20:52:18.331835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.872 [2024-07-15 20:52:18.334507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.872 [2024-07-15 20:52:18.343473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.872 [2024-07-15 20:52:18.343884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.872 [2024-07-15 20:52:18.343899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:43.872 [2024-07-15 20:52:18.343906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:43.872 [2024-07-15 20:52:18.344078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:43.872 [2024-07-15 20:52:18.344255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.872 [2024-07-15 20:52:18.344263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.872 [2024-07-15 20:52:18.344270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.872 [2024-07-15 20:52:18.346932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[... the same nine-record reset cycle repeats 50 more times between 20:52:18.356 and 20:52:18.993, roughly one attempt every 13 ms; each attempt fails identically: connect() to 10.0.0.2:4420 is refused with errno = 111, reinitialization of nqn.2016-06.io.spdk:cnode1 fails, and the reset completes with "Resetting controller failed." ...]
00:26:44.659 [2024-07-15 20:52:19.002077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.659 [2024-07-15 20:52:19.002532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.659 [2024-07-15 20:52:19.002575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:44.659 [2024-07-15 20:52:19.002597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:44.659 [2024-07-15 20:52:19.003175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:44.660 [2024-07-15 20:52:19.003712] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.660 [2024-07-15 20:52:19.003720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.660 [2024-07-15 20:52:19.003726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.660 [2024-07-15 20:52:19.007744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.660 [2024-07-15 20:52:19.015685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.660 [2024-07-15 20:52:19.016147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.660 [2024-07-15 20:52:19.016162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:44.660 [2024-07-15 20:52:19.016169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:44.660 [2024-07-15 20:52:19.016346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:44.660 [2024-07-15 20:52:19.016518] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.660 [2024-07-15 20:52:19.016525] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.660 [2024-07-15 20:52:19.016534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.660 [2024-07-15 20:52:19.019242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.660 [2024-07-15 20:52:19.028763] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.660 [2024-07-15 20:52:19.029181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.660 [2024-07-15 20:52:19.029197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:44.660 [2024-07-15 20:52:19.029204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:44.660 [2024-07-15 20:52:19.029387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:44.660 [2024-07-15 20:52:19.029564] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.660 [2024-07-15 20:52:19.029571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.660 [2024-07-15 20:52:19.029578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.660 [2024-07-15 20:52:19.032397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.660 [2024-07-15 20:52:19.041896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.660 [2024-07-15 20:52:19.042348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.660 [2024-07-15 20:52:19.042364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:44.660 [2024-07-15 20:52:19.042371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:44.660 [2024-07-15 20:52:19.042547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:44.660 [2024-07-15 20:52:19.042723] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.660 [2024-07-15 20:52:19.042731] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.660 [2024-07-15 20:52:19.042738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.660 [2024-07-15 20:52:19.045564] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.660 [2024-07-15 20:52:19.055069] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.660 [2024-07-15 20:52:19.055526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.660 [2024-07-15 20:52:19.055541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:44.660 [2024-07-15 20:52:19.055548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:44.660 [2024-07-15 20:52:19.055724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:44.660 [2024-07-15 20:52:19.055901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.660 [2024-07-15 20:52:19.055908] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.660 [2024-07-15 20:52:19.055914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.660 [2024-07-15 20:52:19.058743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.660 [2024-07-15 20:52:19.068262] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.660 [2024-07-15 20:52:19.068733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.660 [2024-07-15 20:52:19.068752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:44.660 [2024-07-15 20:52:19.068759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:44.660 [2024-07-15 20:52:19.068935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:44.660 [2024-07-15 20:52:19.069112] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.660 [2024-07-15 20:52:19.069120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.660 [2024-07-15 20:52:19.069127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.660 [2024-07-15 20:52:19.071959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.660 [2024-07-15 20:52:19.081458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.660 [2024-07-15 20:52:19.081920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.660 [2024-07-15 20:52:19.081962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:44.660 [2024-07-15 20:52:19.081983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:44.660 [2024-07-15 20:52:19.082572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:44.660 [2024-07-15 20:52:19.082981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.660 [2024-07-15 20:52:19.082989] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.660 [2024-07-15 20:52:19.082995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.660 [2024-07-15 20:52:19.085827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.660 [2024-07-15 20:52:19.094448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.660 [2024-07-15 20:52:19.094960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.660 [2024-07-15 20:52:19.095001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:44.660 [2024-07-15 20:52:19.095023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:44.660 [2024-07-15 20:52:19.095615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:44.660 [2024-07-15 20:52:19.095833] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.660 [2024-07-15 20:52:19.095841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.660 [2024-07-15 20:52:19.095847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.660 [2024-07-15 20:52:19.098572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.660 [2024-07-15 20:52:19.107367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.660 [2024-07-15 20:52:19.107782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.660 [2024-07-15 20:52:19.107797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:44.660 [2024-07-15 20:52:19.107803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:44.660 [2024-07-15 20:52:19.107974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:44.660 [2024-07-15 20:52:19.108149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.660 [2024-07-15 20:52:19.108157] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.660 [2024-07-15 20:52:19.108163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.660 [2024-07-15 20:52:19.110899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.660 [2024-07-15 20:52:19.120253] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.660 [2024-07-15 20:52:19.120713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.660 [2024-07-15 20:52:19.120728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:44.660 [2024-07-15 20:52:19.120734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:44.660 [2024-07-15 20:52:19.120905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:44.660 [2024-07-15 20:52:19.121077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.660 [2024-07-15 20:52:19.121084] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.660 [2024-07-15 20:52:19.121090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.660 [2024-07-15 20:52:19.123781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.660 [2024-07-15 20:52:19.133096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.660 [2024-07-15 20:52:19.133516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.660 [2024-07-15 20:52:19.133531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:44.660 [2024-07-15 20:52:19.133538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:44.660 [2024-07-15 20:52:19.133709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:44.660 [2024-07-15 20:52:19.133882] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.660 [2024-07-15 20:52:19.133889] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.660 [2024-07-15 20:52:19.133895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.660 [2024-07-15 20:52:19.136610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.920 [2024-07-15 20:52:19.145996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.920 [2024-07-15 20:52:19.146397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.920 [2024-07-15 20:52:19.146414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:44.920 [2024-07-15 20:52:19.146421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:44.920 [2024-07-15 20:52:19.146592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:44.920 [2024-07-15 20:52:19.146764] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.920 [2024-07-15 20:52:19.146771] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.920 [2024-07-15 20:52:19.146778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.920 [2024-07-15 20:52:19.149458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.920 [2024-07-15 20:52:19.158843] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.920 [2024-07-15 20:52:19.159346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.920 [2024-07-15 20:52:19.159389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:44.920 [2024-07-15 20:52:19.159411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:44.920 [2024-07-15 20:52:19.159963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:44.920 [2024-07-15 20:52:19.160135] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.920 [2024-07-15 20:52:19.160142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.920 [2024-07-15 20:52:19.160149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.920 [2024-07-15 20:52:19.162886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.920 [2024-07-15 20:52:19.171982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.920 [2024-07-15 20:52:19.172385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.920 [2024-07-15 20:52:19.172401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:44.920 [2024-07-15 20:52:19.172408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:44.920 [2024-07-15 20:52:19.172584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:44.920 [2024-07-15 20:52:19.172761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.920 [2024-07-15 20:52:19.172768] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.920 [2024-07-15 20:52:19.172775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.920 [2024-07-15 20:52:19.175554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.920 [2024-07-15 20:52:19.185120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.920 [2024-07-15 20:52:19.185455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.920 [2024-07-15 20:52:19.185471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:44.920 [2024-07-15 20:52:19.185478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:44.920 [2024-07-15 20:52:19.185655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:44.920 [2024-07-15 20:52:19.185832] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.920 [2024-07-15 20:52:19.185839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.920 [2024-07-15 20:52:19.185846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.920 [2024-07-15 20:52:19.188685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.920 [2024-07-15 20:52:19.198197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.920 [2024-07-15 20:52:19.198601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.920 [2024-07-15 20:52:19.198616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:44.920 [2024-07-15 20:52:19.198626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:44.920 [2024-07-15 20:52:19.198803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:44.920 [2024-07-15 20:52:19.198980] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.920 [2024-07-15 20:52:19.198987] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.920 [2024-07-15 20:52:19.198994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.920 [2024-07-15 20:52:19.201819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.920 [2024-07-15 20:52:19.211340] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.920 [2024-07-15 20:52:19.211803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.920 [2024-07-15 20:52:19.211818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:44.920 [2024-07-15 20:52:19.211825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:44.920 [2024-07-15 20:52:19.212018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:44.920 [2024-07-15 20:52:19.212200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.920 [2024-07-15 20:52:19.212208] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.920 [2024-07-15 20:52:19.212215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.920 [2024-07-15 20:52:19.215058] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.920 [2024-07-15 20:52:19.224414] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.920 [2024-07-15 20:52:19.224885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.920 [2024-07-15 20:52:19.224900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:44.920 [2024-07-15 20:52:19.224907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:44.920 [2024-07-15 20:52:19.225083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:44.920 [2024-07-15 20:52:19.225267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.920 [2024-07-15 20:52:19.225275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.920 [2024-07-15 20:52:19.225282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.920 [2024-07-15 20:52:19.228105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.920 [2024-07-15 20:52:19.237639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.920 [2024-07-15 20:52:19.238089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.921 [2024-07-15 20:52:19.238104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:44.921 [2024-07-15 20:52:19.238111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:44.921 [2024-07-15 20:52:19.238292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:44.921 [2024-07-15 20:52:19.238469] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.921 [2024-07-15 20:52:19.238479] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.921 [2024-07-15 20:52:19.238486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.921 [2024-07-15 20:52:19.241366] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
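[Annotation] The cycle repeats roughly every 13 ms throughout this stretch; when triaging a run like this it is usually faster to count the failures than to read them. A hedged pair of one-liners over a saved console log (file name illustrative):

    # Count failed reset cycles in a captured console log:
    grep -c 'Resetting controller failed' bdevperf_console.log
    # List the distinct errnos seen on connect(), with counts:
    grep -o 'errno = [0-9]*' bdevperf_console.log | sort | uniq -c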
00:26:44.921 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2839741 Killed "${NVMF_APP[@]}" "$@" 00:26:44.921 20:52:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:26:44.921 20:52:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:44.921 20:52:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:44.921 20:52:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:44.921 20:52:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:44.921 [2024-07-15 20:52:19.250720] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.921 [2024-07-15 20:52:19.251192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.921 [2024-07-15 20:52:19.251208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:44.921 [2024-07-15 20:52:19.251215] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:44.921 [2024-07-15 20:52:19.251397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:44.921 [2024-07-15 20:52:19.251574] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.921 [2024-07-15 20:52:19.251582] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.921 [2024-07-15 20:52:19.251588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.921 [2024-07-15 20:52:19.254424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.921 20:52:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2841145 00:26:44.921 20:52:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2841145 00:26:44.921 20:52:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:44.921 20:52:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2841145 ']' 00:26:44.921 20:52:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:44.921 20:52:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:44.921 20:52:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:44.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:44.921 20:52:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:44.921 20:52:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:44.921 [2024-07-15 20:52:19.263778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.921 [2024-07-15 20:52:19.264245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.921 [2024-07-15 20:52:19.264262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:44.921 [2024-07-15 20:52:19.264269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:44.921 [2024-07-15 20:52:19.264445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:44.921 [2024-07-15 20:52:19.264622] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.921 [2024-07-15 20:52:19.264635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.921 [2024-07-15 20:52:19.264642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.921 [2024-07-15 20:52:19.267480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.921 [2024-07-15 20:52:19.276830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.921 [2024-07-15 20:52:19.277299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.921 [2024-07-15 20:52:19.277315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:44.921 [2024-07-15 20:52:19.277323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:44.921 [2024-07-15 20:52:19.277499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:44.921 [2024-07-15 20:52:19.277677] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.921 [2024-07-15 20:52:19.277685] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.921 [2024-07-15 20:52:19.277691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.921 [2024-07-15 20:52:19.280523] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
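[Annotation] Between the retry records, the harness output above shows the earlier target process (PID 2839741) was killed, tgt_init restarted nvmf_tgt with `-m 0xE` as PID 2841145, and `waitforlisten` now blocks until the new process is up on /var/tmp/spdk.sock. A sketch of that wait step, under the assumption that it polls for the RPC Unix socket (the real waitforlisten in the repo may differ):

    # Hedged sketch of a waitforlisten-style helper: poll until the given PID
    # is alive and its RPC Unix socket exists. Details are assumptions, not
    # copied from the SPDK scripts.
    wait_for_rpc() {
        local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock} retries=100
        while (( retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target process died
            [[ -S $rpc_sock ]] && return 0           # socket present: listening
            sleep 0.1
        done
        return 1                                     # timed out
    }
    # e.g. wait_for_rpc 2841145 /var/tmp/spdk.sock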
00:26:44.921 [2024-07-15 20:52:19.289886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.921 [2024-07-15 20:52:19.290351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.921 [2024-07-15 20:52:19.290367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:44.921 [2024-07-15 20:52:19.290374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:44.921 [2024-07-15 20:52:19.290551] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:44.921 [2024-07-15 20:52:19.290729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.921 [2024-07-15 20:52:19.290737] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.921 [2024-07-15 20:52:19.290744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.921 [2024-07-15 20:52:19.293546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.921 [2024-07-15 20:52:19.301277] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:26:44.921 [2024-07-15 20:52:19.301317] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:44.921 [2024-07-15 20:52:19.303077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.921 [2024-07-15 20:52:19.303481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.921 [2024-07-15 20:52:19.303497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:44.921 [2024-07-15 20:52:19.303505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:44.921 [2024-07-15 20:52:19.303682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:44.921 [2024-07-15 20:52:19.303859] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.921 [2024-07-15 20:52:19.303867] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.921 [2024-07-15 20:52:19.303877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.921 [2024-07-15 20:52:19.306707] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.921 [2024-07-15 20:52:19.316195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.921 [2024-07-15 20:52:19.316602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.921 [2024-07-15 20:52:19.316621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:44.921 [2024-07-15 20:52:19.316630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:44.921 [2024-07-15 20:52:19.316809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:44.921 [2024-07-15 20:52:19.316986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.922 [2024-07-15 20:52:19.316995] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.922 [2024-07-15 20:52:19.317001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.922 [2024-07-15 20:52:19.319795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.922 EAL: No free 2048 kB hugepages reported on node 1 00:26:44.922 [2024-07-15 20:52:19.329354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.922 [2024-07-15 20:52:19.329771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.922 [2024-07-15 20:52:19.329788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:44.922 [2024-07-15 20:52:19.329795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:44.922 [2024-07-15 20:52:19.329972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:44.922 [2024-07-15 20:52:19.330150] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.922 [2024-07-15 20:52:19.330157] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.922 [2024-07-15 20:52:19.330164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.922 [2024-07-15 20:52:19.333003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.922 [2024-07-15 20:52:19.342467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.922 [2024-07-15 20:52:19.342877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.922 [2024-07-15 20:52:19.342893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:44.922 [2024-07-15 20:52:19.342901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:44.922 [2024-07-15 20:52:19.343077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:44.922 [2024-07-15 20:52:19.343257] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.922 [2024-07-15 20:52:19.343266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.922 [2024-07-15 20:52:19.343273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.922 [2024-07-15 20:52:19.346095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.922 [2024-07-15 20:52:19.355614] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.922 [2024-07-15 20:52:19.356081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.922 [2024-07-15 20:52:19.356096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:44.922 [2024-07-15 20:52:19.356103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:44.922 [2024-07-15 20:52:19.356284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:44.922 [2024-07-15 20:52:19.356461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.922 [2024-07-15 20:52:19.356469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.922 [2024-07-15 20:52:19.356475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.922 [2024-07-15 20:52:19.359258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.922 [2024-07-15 20:52:19.361058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:44.922 [2024-07-15 20:52:19.368716] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.922 [2024-07-15 20:52:19.369183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.922 [2024-07-15 20:52:19.369200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:44.922 [2024-07-15 20:52:19.369208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:44.922 [2024-07-15 20:52:19.369405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:44.922 [2024-07-15 20:52:19.369582] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.922 [2024-07-15 20:52:19.369590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.922 [2024-07-15 20:52:19.369597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.922 [2024-07-15 20:52:19.372427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.922 [2024-07-15 20:52:19.381816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.922 [2024-07-15 20:52:19.382314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.922 [2024-07-15 20:52:19.382330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:44.922 [2024-07-15 20:52:19.382337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:44.922 [2024-07-15 20:52:19.382509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:44.922 [2024-07-15 20:52:19.382680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.922 [2024-07-15 20:52:19.382688] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.922 [2024-07-15 20:52:19.382695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.922 [2024-07-15 20:52:19.385493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.922 [2024-07-15 20:52:19.394906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.922 [2024-07-15 20:52:19.395379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.922 [2024-07-15 20:52:19.395395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:44.922 [2024-07-15 20:52:19.395402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:44.922 [2024-07-15 20:52:19.395587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:44.922 [2024-07-15 20:52:19.395765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.922 [2024-07-15 20:52:19.395773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.922 [2024-07-15 20:52:19.395779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.922 [2024-07-15 20:52:19.398557] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.181 [2024-07-15 20:52:19.407919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.181 [2024-07-15 20:52:19.408286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.181 [2024-07-15 20:52:19.408308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.181 [2024-07-15 20:52:19.408316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.181 [2024-07-15 20:52:19.408494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.181 [2024-07-15 20:52:19.408672] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.181 [2024-07-15 20:52:19.408681] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.181 [2024-07-15 20:52:19.408688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.182 [2024-07-15 20:52:19.411469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.182 [2024-07-15 20:52:19.421030] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.182 [2024-07-15 20:52:19.421388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.182 [2024-07-15 20:52:19.421405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.182 [2024-07-15 20:52:19.421413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.182 [2024-07-15 20:52:19.421589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.182 [2024-07-15 20:52:19.421769] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.182 [2024-07-15 20:52:19.421776] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.182 [2024-07-15 20:52:19.421783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.182 [2024-07-15 20:52:19.424616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.182 [2024-07-15 20:52:19.434122] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.182 [2024-07-15 20:52:19.434445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.182 [2024-07-15 20:52:19.434461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.182 [2024-07-15 20:52:19.434468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.182 [2024-07-15 20:52:19.434645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.182 [2024-07-15 20:52:19.434822] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.182 [2024-07-15 20:52:19.434829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.182 [2024-07-15 20:52:19.434840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.182 [2024-07-15 20:52:19.437669] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.182 [2024-07-15 20:52:19.443343] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:45.182 [2024-07-15 20:52:19.443370] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:45.182 [2024-07-15 20:52:19.443377] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:45.182 [2024-07-15 20:52:19.443384] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:45.182 [2024-07-15 20:52:19.443389] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
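[Annotation] The app_setup_trace notices above name the capture command for this run verbatim; a typical offline flow (output file names are illustrative) would be:

    # Snapshot trace events from the running app, quoting the command the
    # notices above print; file names here are illustrative.
    spdk_trace -s nvmf -i 0 > nvmf_trace.txt
    # Or keep the shared-memory trace file for later decoding:
    cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0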
00:26:45.182 [2024-07-15 20:52:19.443485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:45.182 [2024-07-15 20:52:19.443552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:45.182 [2024-07-15 20:52:19.443553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:45.182 [2024-07-15 20:52:19.447197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.182 [2024-07-15 20:52:19.447620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.182 [2024-07-15 20:52:19.447637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.182 [2024-07-15 20:52:19.447645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.182 [2024-07-15 20:52:19.447823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.182 [2024-07-15 20:52:19.448001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.182 [2024-07-15 20:52:19.448010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.182 [2024-07-15 20:52:19.448017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.182 [2024-07-15 20:52:19.450851] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.182 [2024-07-15 20:52:19.460395] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.182 [2024-07-15 20:52:19.460735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.182 [2024-07-15 20:52:19.460753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.182 [2024-07-15 20:52:19.460761] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.182 [2024-07-15 20:52:19.460940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.182 [2024-07-15 20:52:19.461120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.182 [2024-07-15 20:52:19.461130] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.182 [2024-07-15 20:52:19.461138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.182 [2024-07-15 20:52:19.463965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
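[Annotation] The three "Reactor started" notices match the `-m 0xE` core mask passed to nvmf_tgt above: bits 1-3 set selects cores 1, 2 and 3, consistent with "Total cores available: 3". A quick check of that decoding:

    # Decode a core mask the way the reactor notices above reflect it:
    mask=0xE
    printf 'mask 0x%X -> cores:' "$mask"
    for bit in {0..31}; do (( (mask >> bit) & 1 )) && printf ' %d' "$bit"; done
    echo    # prints: mask 0xE -> cores: 1 2 3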
00:26:45.182 [2024-07-15 20:52:19.473480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.182 [2024-07-15 20:52:19.473875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.182 [2024-07-15 20:52:19.473893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.182 [2024-07-15 20:52:19.473901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.182 [2024-07-15 20:52:19.474084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.182 [2024-07-15 20:52:19.474266] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.182 [2024-07-15 20:52:19.474275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.182 [2024-07-15 20:52:19.474282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.182 [2024-07-15 20:52:19.477101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.182 [2024-07-15 20:52:19.486627] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.182 [2024-07-15 20:52:19.487109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.182 [2024-07-15 20:52:19.487128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.182 [2024-07-15 20:52:19.487135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.182 [2024-07-15 20:52:19.487320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.182 [2024-07-15 20:52:19.487499] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.182 [2024-07-15 20:52:19.487507] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.182 [2024-07-15 20:52:19.487514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.182 [2024-07-15 20:52:19.490352] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.182 [2024-07-15 20:52:19.499697] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.182 [2024-07-15 20:52:19.500184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.182 [2024-07-15 20:52:19.500203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.182 [2024-07-15 20:52:19.500211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.182 [2024-07-15 20:52:19.500395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.182 [2024-07-15 20:52:19.500572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.182 [2024-07-15 20:52:19.500581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.182 [2024-07-15 20:52:19.500588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.182 [2024-07-15 20:52:19.503417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.182 [2024-07-15 20:52:19.512747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.182 [2024-07-15 20:52:19.513198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.182 [2024-07-15 20:52:19.513214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.182 [2024-07-15 20:52:19.513221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.182 [2024-07-15 20:52:19.513404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.182 [2024-07-15 20:52:19.513581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.182 [2024-07-15 20:52:19.513589] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.182 [2024-07-15 20:52:19.513601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.182 [2024-07-15 20:52:19.516425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.182 [2024-07-15 20:52:19.525924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.182 [2024-07-15 20:52:19.526379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.182 [2024-07-15 20:52:19.526395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.182 [2024-07-15 20:52:19.526403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.182 [2024-07-15 20:52:19.526580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.182 [2024-07-15 20:52:19.526758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.182 [2024-07-15 20:52:19.526767] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.182 [2024-07-15 20:52:19.526773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.182 [2024-07-15 20:52:19.529592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.182 [2024-07-15 20:52:19.539085] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.182 [2024-07-15 20:52:19.539484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.182 [2024-07-15 20:52:19.539500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.183 [2024-07-15 20:52:19.539507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.183 [2024-07-15 20:52:19.539684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.183 [2024-07-15 20:52:19.539862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.183 [2024-07-15 20:52:19.539870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.183 [2024-07-15 20:52:19.539876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.183 [2024-07-15 20:52:19.542699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.183 [2024-07-15 20:52:19.552210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.183 [2024-07-15 20:52:19.552663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.183 [2024-07-15 20:52:19.552679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.183 [2024-07-15 20:52:19.552686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.183 [2024-07-15 20:52:19.552862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.183 [2024-07-15 20:52:19.553039] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.183 [2024-07-15 20:52:19.553047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.183 [2024-07-15 20:52:19.553053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.183 [2024-07-15 20:52:19.555876] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.183 [2024-07-15 20:52:19.565400] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.183 [2024-07-15 20:52:19.565872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.183 [2024-07-15 20:52:19.565887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.183 [2024-07-15 20:52:19.565893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.183 [2024-07-15 20:52:19.566070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.183 [2024-07-15 20:52:19.566251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.183 [2024-07-15 20:52:19.566260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.183 [2024-07-15 20:52:19.566266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.183 [2024-07-15 20:52:19.569084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.183 [2024-07-15 20:52:19.578597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.183 [2024-07-15 20:52:19.578996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.183 [2024-07-15 20:52:19.579011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.183 [2024-07-15 20:52:19.579019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.183 [2024-07-15 20:52:19.579196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.183 [2024-07-15 20:52:19.579378] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.183 [2024-07-15 20:52:19.579387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.183 [2024-07-15 20:52:19.579393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.183 [2024-07-15 20:52:19.582218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.183 [2024-07-15 20:52:19.591739] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.183 [2024-07-15 20:52:19.592187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.183 [2024-07-15 20:52:19.592203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.183 [2024-07-15 20:52:19.592210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.183 [2024-07-15 20:52:19.592391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.183 [2024-07-15 20:52:19.592568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.183 [2024-07-15 20:52:19.592577] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.183 [2024-07-15 20:52:19.592583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.183 [2024-07-15 20:52:19.595418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.183 [2024-07-15 20:52:19.604930] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.183 [2024-07-15 20:52:19.605370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.183 [2024-07-15 20:52:19.605387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.183 [2024-07-15 20:52:19.605394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.183 [2024-07-15 20:52:19.605571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.183 [2024-07-15 20:52:19.605751] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.183 [2024-07-15 20:52:19.605759] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.183 [2024-07-15 20:52:19.605765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.183 [2024-07-15 20:52:19.608590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.183 [2024-07-15 20:52:19.618098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.183 [2024-07-15 20:52:19.618579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.183 [2024-07-15 20:52:19.618595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.183 [2024-07-15 20:52:19.618602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.183 [2024-07-15 20:52:19.618779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.183 [2024-07-15 20:52:19.618955] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.183 [2024-07-15 20:52:19.618963] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.183 [2024-07-15 20:52:19.618970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.183 [2024-07-15 20:52:19.621799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.183 [2024-07-15 20:52:19.631143] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.183 [2024-07-15 20:52:19.631616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.183 [2024-07-15 20:52:19.631632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.183 [2024-07-15 20:52:19.631639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.183 [2024-07-15 20:52:19.631816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.183 [2024-07-15 20:52:19.631992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.183 [2024-07-15 20:52:19.632000] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.183 [2024-07-15 20:52:19.632006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.183 [2024-07-15 20:52:19.634831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.183 [2024-07-15 20:52:19.644319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.183 [2024-07-15 20:52:19.644795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.183 [2024-07-15 20:52:19.644810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.183 [2024-07-15 20:52:19.644817] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.183 [2024-07-15 20:52:19.644994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.183 [2024-07-15 20:52:19.645171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.183 [2024-07-15 20:52:19.645179] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.183 [2024-07-15 20:52:19.645186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.183 [2024-07-15 20:52:19.648015] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.183 [2024-07-15 20:52:19.657354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.183 [2024-07-15 20:52:19.657751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.183 [2024-07-15 20:52:19.657766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.183 [2024-07-15 20:52:19.657773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.183 [2024-07-15 20:52:19.657950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.183 [2024-07-15 20:52:19.658127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.183 [2024-07-15 20:52:19.658135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.183 [2024-07-15 20:52:19.658142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.184 [2024-07-15 20:52:19.660968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.443 [2024-07-15 20:52:19.670475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.443 [2024-07-15 20:52:19.670922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-07-15 20:52:19.670938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.443 [2024-07-15 20:52:19.670945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.443 [2024-07-15 20:52:19.671122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.443 [2024-07-15 20:52:19.671305] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.443 [2024-07-15 20:52:19.671314] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.443 [2024-07-15 20:52:19.671321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.443 [2024-07-15 20:52:19.674143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.443 [2024-07-15 20:52:19.683660] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.443 [2024-07-15 20:52:19.684133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-07-15 20:52:19.684149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.443 [2024-07-15 20:52:19.684156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.443 [2024-07-15 20:52:19.684337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.443 [2024-07-15 20:52:19.684515] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.443 [2024-07-15 20:52:19.684523] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.443 [2024-07-15 20:52:19.684529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.443 [2024-07-15 20:52:19.687356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.443 [2024-07-15 20:52:19.696694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.443 [2024-07-15 20:52:19.697145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-07-15 20:52:19.697164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.443 [2024-07-15 20:52:19.697170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.443 [2024-07-15 20:52:19.697352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.443 [2024-07-15 20:52:19.697530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.443 [2024-07-15 20:52:19.697538] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.443 [2024-07-15 20:52:19.697544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.443 [2024-07-15 20:52:19.700370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.443 [2024-07-15 20:52:19.709877] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.443 [2024-07-15 20:52:19.710349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-07-15 20:52:19.710365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.443 [2024-07-15 20:52:19.710372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.443 [2024-07-15 20:52:19.710549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.443 [2024-07-15 20:52:19.710726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.443 [2024-07-15 20:52:19.710733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.443 [2024-07-15 20:52:19.710740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.443 [2024-07-15 20:52:19.713567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.443 [2024-07-15 20:52:19.722915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.443 [2024-07-15 20:52:19.723385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-07-15 20:52:19.723401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.443 [2024-07-15 20:52:19.723407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.443 [2024-07-15 20:52:19.723584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.443 [2024-07-15 20:52:19.723762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.443 [2024-07-15 20:52:19.723770] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.443 [2024-07-15 20:52:19.723776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.443 [2024-07-15 20:52:19.726598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.443 [2024-07-15 20:52:19.736096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.443 [2024-07-15 20:52:19.736575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-07-15 20:52:19.736591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.443 [2024-07-15 20:52:19.736598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.443 [2024-07-15 20:52:19.736775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.443 [2024-07-15 20:52:19.736956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.443 [2024-07-15 20:52:19.736964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.443 [2024-07-15 20:52:19.736970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.443 [2024-07-15 20:52:19.739793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.443 [2024-07-15 20:52:19.749292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.443 [2024-07-15 20:52:19.749642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-07-15 20:52:19.749657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.443 [2024-07-15 20:52:19.749665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.443 [2024-07-15 20:52:19.749842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.443 [2024-07-15 20:52:19.750018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.443 [2024-07-15 20:52:19.750026] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.443 [2024-07-15 20:52:19.750032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.443 [2024-07-15 20:52:19.752857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.443 [2024-07-15 20:52:19.762369] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.443 [2024-07-15 20:52:19.762856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-07-15 20:52:19.762872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.443 [2024-07-15 20:52:19.762879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.443 [2024-07-15 20:52:19.763056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.443 [2024-07-15 20:52:19.763237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.443 [2024-07-15 20:52:19.763245] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.443 [2024-07-15 20:52:19.763252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.443 [2024-07-15 20:52:19.766074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.443 [2024-07-15 20:52:19.775422] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.443 [2024-07-15 20:52:19.775813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-07-15 20:52:19.775829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.443 [2024-07-15 20:52:19.775836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.443 [2024-07-15 20:52:19.776013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.443 [2024-07-15 20:52:19.776190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.443 [2024-07-15 20:52:19.776198] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.443 [2024-07-15 20:52:19.776205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.443 [2024-07-15 20:52:19.779031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.443 [2024-07-15 20:52:19.788562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.443 [2024-07-15 20:52:19.789049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-07-15 20:52:19.789064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.443 [2024-07-15 20:52:19.789071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.443 [2024-07-15 20:52:19.789252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.443 [2024-07-15 20:52:19.789430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.443 [2024-07-15 20:52:19.789438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.443 [2024-07-15 20:52:19.789444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.443 [2024-07-15 20:52:19.792267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.443 [2024-07-15 20:52:19.801615] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.443 [2024-07-15 20:52:19.802094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-07-15 20:52:19.802109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.443 [2024-07-15 20:52:19.802116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.443 [2024-07-15 20:52:19.802296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.443 [2024-07-15 20:52:19.802473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.443 [2024-07-15 20:52:19.802481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.443 [2024-07-15 20:52:19.802488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.443 [2024-07-15 20:52:19.805319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.443 [2024-07-15 20:52:19.814663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.443 [2024-07-15 20:52:19.815148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-07-15 20:52:19.815164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.443 [2024-07-15 20:52:19.815171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.443 [2024-07-15 20:52:19.815352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.443 [2024-07-15 20:52:19.815529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.443 [2024-07-15 20:52:19.815537] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.443 [2024-07-15 20:52:19.815544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.443 [2024-07-15 20:52:19.818370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.443 [2024-07-15 20:52:19.827707] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.443 [2024-07-15 20:52:19.828185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-07-15 20:52:19.828201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.443 [2024-07-15 20:52:19.828211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.443 [2024-07-15 20:52:19.828392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.443 [2024-07-15 20:52:19.828570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.443 [2024-07-15 20:52:19.828578] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.443 [2024-07-15 20:52:19.828584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.443 [2024-07-15 20:52:19.831406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.443 [2024-07-15 20:52:19.840734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.443 [2024-07-15 20:52:19.841213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-07-15 20:52:19.841233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.443 [2024-07-15 20:52:19.841240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.443 [2024-07-15 20:52:19.841417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.443 [2024-07-15 20:52:19.841594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.443 [2024-07-15 20:52:19.841602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.443 [2024-07-15 20:52:19.841608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.443 [2024-07-15 20:52:19.844433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.443 [2024-07-15 20:52:19.853769] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.443 [2024-07-15 20:52:19.854249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-07-15 20:52:19.854265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.443 [2024-07-15 20:52:19.854272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.443 [2024-07-15 20:52:19.854449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.443 [2024-07-15 20:52:19.854625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.443 [2024-07-15 20:52:19.854633] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.443 [2024-07-15 20:52:19.854639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.443 [2024-07-15 20:52:19.857469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.443 [2024-07-15 20:52:19.866841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.443 [2024-07-15 20:52:19.867298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-07-15 20:52:19.867315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.444 [2024-07-15 20:52:19.867322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.444 [2024-07-15 20:52:19.867499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.444 [2024-07-15 20:52:19.867676] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.444 [2024-07-15 20:52:19.867687] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.444 [2024-07-15 20:52:19.867694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.444 [2024-07-15 20:52:19.870500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.444 [2024-07-15 20:52:19.880006] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.444 [2024-07-15 20:52:19.880480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.444 [2024-07-15 20:52:19.880496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.444 [2024-07-15 20:52:19.880504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.444 [2024-07-15 20:52:19.880680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.444 [2024-07-15 20:52:19.880858] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.444 [2024-07-15 20:52:19.880866] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.444 [2024-07-15 20:52:19.880873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.444 [2024-07-15 20:52:19.883662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.444 [2024-07-15 20:52:19.893197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.444 [2024-07-15 20:52:19.893669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.444 [2024-07-15 20:52:19.893685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.444 [2024-07-15 20:52:19.893692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.444 [2024-07-15 20:52:19.893869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.444 [2024-07-15 20:52:19.894046] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.444 [2024-07-15 20:52:19.894054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.444 [2024-07-15 20:52:19.894060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.444 [2024-07-15 20:52:19.896883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.444 [2024-07-15 20:52:19.906226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.444 [2024-07-15 20:52:19.906682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.444 [2024-07-15 20:52:19.906698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.444 [2024-07-15 20:52:19.906705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.444 [2024-07-15 20:52:19.906881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.444 [2024-07-15 20:52:19.907059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.444 [2024-07-15 20:52:19.907067] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.444 [2024-07-15 20:52:19.907074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.444 [2024-07-15 20:52:19.909902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.444 [2024-07-15 20:52:19.919407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.444 [2024-07-15 20:52:19.919815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.444 [2024-07-15 20:52:19.919830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.444 [2024-07-15 20:52:19.919837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.444 [2024-07-15 20:52:19.920013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.444 [2024-07-15 20:52:19.920190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.444 [2024-07-15 20:52:19.920199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.444 [2024-07-15 20:52:19.920205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.444 [2024-07-15 20:52:19.923036] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.705 [2024-07-15 20:52:19.932536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.705 [2024-07-15 20:52:19.933031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.705 [2024-07-15 20:52:19.933046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.705 [2024-07-15 20:52:19.933053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.705 [2024-07-15 20:52:19.933236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.705 [2024-07-15 20:52:19.933415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.705 [2024-07-15 20:52:19.933424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.705 [2024-07-15 20:52:19.933430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.705 [2024-07-15 20:52:19.936257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.705 [2024-07-15 20:52:19.945589] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.705 [2024-07-15 20:52:19.946061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.705 [2024-07-15 20:52:19.946077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.705 [2024-07-15 20:52:19.946084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.705 [2024-07-15 20:52:19.946266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.705 [2024-07-15 20:52:19.946443] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.705 [2024-07-15 20:52:19.946451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.705 [2024-07-15 20:52:19.946457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.705 [2024-07-15 20:52:19.949282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.705 [2024-07-15 20:52:19.958636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.705 [2024-07-15 20:52:19.959043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.705 [2024-07-15 20:52:19.959058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.705 [2024-07-15 20:52:19.959065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.705 [2024-07-15 20:52:19.959250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.705 [2024-07-15 20:52:19.959428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.705 [2024-07-15 20:52:19.959436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.705 [2024-07-15 20:52:19.959442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.705 [2024-07-15 20:52:19.962269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.705 [2024-07-15 20:52:19.971770] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.705 [2024-07-15 20:52:19.972248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.705 [2024-07-15 20:52:19.972265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.705 [2024-07-15 20:52:19.972272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.705 [2024-07-15 20:52:19.972448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.705 [2024-07-15 20:52:19.972626] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.705 [2024-07-15 20:52:19.972634] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.705 [2024-07-15 20:52:19.972640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.705 [2024-07-15 20:52:19.975465] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.705 [2024-07-15 20:52:19.984805] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.705 [2024-07-15 20:52:19.985281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.705 [2024-07-15 20:52:19.985297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.705 [2024-07-15 20:52:19.985304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.705 [2024-07-15 20:52:19.985481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.705 [2024-07-15 20:52:19.985658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.705 [2024-07-15 20:52:19.985666] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.705 [2024-07-15 20:52:19.985674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.705 [2024-07-15 20:52:19.988525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.705 [2024-07-15 20:52:19.997858] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.705 [2024-07-15 20:52:19.998253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.705 [2024-07-15 20:52:19.998269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.705 [2024-07-15 20:52:19.998276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.705 [2024-07-15 20:52:19.998453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.705 [2024-07-15 20:52:19.998631] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.705 [2024-07-15 20:52:19.998638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.705 [2024-07-15 20:52:19.998648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.705 [2024-07-15 20:52:20.001523] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.705 [2024-07-15 20:52:20.010943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.705 [2024-07-15 20:52:20.011365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.705 [2024-07-15 20:52:20.011383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.705 [2024-07-15 20:52:20.011390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.705 [2024-07-15 20:52:20.011568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.705 [2024-07-15 20:52:20.011745] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.705 [2024-07-15 20:52:20.011754] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.705 [2024-07-15 20:52:20.011760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.705 [2024-07-15 20:52:20.014594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.705 [2024-07-15 20:52:20.024098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.705 [2024-07-15 20:52:20.024558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.705 [2024-07-15 20:52:20.024575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.705 [2024-07-15 20:52:20.024582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.705 [2024-07-15 20:52:20.024759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.705 [2024-07-15 20:52:20.025025] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.705 [2024-07-15 20:52:20.025037] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.705 [2024-07-15 20:52:20.025045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.705 [2024-07-15 20:52:20.028842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.705 [2024-07-15 20:52:20.037186] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.705 [2024-07-15 20:52:20.037683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.705 [2024-07-15 20:52:20.037701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.705 [2024-07-15 20:52:20.037708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.705 [2024-07-15 20:52:20.037885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.705 [2024-07-15 20:52:20.038062] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.705 [2024-07-15 20:52:20.038071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.705 [2024-07-15 20:52:20.038077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.705 [2024-07-15 20:52:20.040910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.705 [2024-07-15 20:52:20.050256] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.705 [2024-07-15 20:52:20.050638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.705 [2024-07-15 20:52:20.050654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.705 [2024-07-15 20:52:20.050662] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.705 [2024-07-15 20:52:20.050838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.706 [2024-07-15 20:52:20.051017] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.706 [2024-07-15 20:52:20.051026] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.706 [2024-07-15 20:52:20.051032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.706 [2024-07-15 20:52:20.053859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.706 [2024-07-15 20:52:20.063382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.706 [2024-07-15 20:52:20.063753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.706 [2024-07-15 20:52:20.063769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420 00:26:45.706 [2024-07-15 20:52:20.063777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set 00:26:45.706 [2024-07-15 20:52:20.063954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor 00:26:45.706 [2024-07-15 20:52:20.064131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.706 [2024-07-15 20:52:20.064140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.706 [2024-07-15 20:52:20.064146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.706 [2024-07-15 20:52:20.066998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.706 [2024-07-15 20:52:20.076499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.706 [2024-07-15 20:52:20.076980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.706 [2024-07-15 20:52:20.076997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:45.706 [2024-07-15 20:52:20.077004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:45.706 [2024-07-15 20:52:20.077181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:45.706 [2024-07-15 20:52:20.077365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.706 [2024-07-15 20:52:20.077373] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.706 [2024-07-15 20:52:20.077380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.706 [2024-07-15 20:52:20.080203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.706 [2024-07-15 20:52:20.089555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.706 [2024-07-15 20:52:20.090034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.706 [2024-07-15 20:52:20.090050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:45.706 [2024-07-15 20:52:20.090057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:45.706 [2024-07-15 20:52:20.090239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:45.706 [2024-07-15 20:52:20.090419] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.706 [2024-07-15 20:52:20.090427] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.706 [2024-07-15 20:52:20.090434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.706 [2024-07-15 20:52:20.093262] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.706 [2024-07-15 20:52:20.102596] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.706 [2024-07-15 20:52:20.103070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.706 [2024-07-15 20:52:20.103086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:45.706 [2024-07-15 20:52:20.103093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:45.706 [2024-07-15 20:52:20.103275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:45.706 [2024-07-15 20:52:20.103453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.706 [2024-07-15 20:52:20.103461] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.706 [2024-07-15 20:52:20.103467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.706 [2024-07-15 20:52:20.106295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.706 [2024-07-15 20:52:20.115639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.706 [2024-07-15 20:52:20.116117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.706 [2024-07-15 20:52:20.116133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:45.706 [2024-07-15 20:52:20.116140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:45.706 [2024-07-15 20:52:20.116320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:45.706 [2024-07-15 20:52:20.116499] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.706 [2024-07-15 20:52:20.116507] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.706 [2024-07-15 20:52:20.116513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.706 20:52:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:45.706 20:52:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0
00:26:45.706 20:52:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:26:45.706 20:52:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable
00:26:45.706 [2024-07-15 20:52:20.119343] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.706 20:52:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:45.706 [2024-07-15 20:52:20.128697] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.706 [2024-07-15 20:52:20.129103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.706 [2024-07-15 20:52:20.129120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:45.706 [2024-07-15 20:52:20.129127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:45.706 [2024-07-15 20:52:20.129309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:45.706 [2024-07-15 20:52:20.129489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.706 [2024-07-15 20:52:20.129499] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.706 [2024-07-15 20:52:20.129505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.706 [2024-07-15 20:52:20.132333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.706 [2024-07-15 20:52:20.141830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.706 [2024-07-15 20:52:20.142212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.706 [2024-07-15 20:52:20.142233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:45.706 [2024-07-15 20:52:20.142240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:45.706 [2024-07-15 20:52:20.142417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:45.706 [2024-07-15 20:52:20.142598] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.706 [2024-07-15 20:52:20.142606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.706 [2024-07-15 20:52:20.142612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.706 [2024-07-15 20:52:20.145440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.706 20:52:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:45.706 20:52:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:26:45.706 20:52:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:45.706 20:52:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:45.706 [2024-07-15 20:52:20.154939] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.706 [2024-07-15 20:52:20.155388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.706 [2024-07-15 20:52:20.155404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:45.706 [2024-07-15 20:52:20.155413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:45.706 [2024-07-15 20:52:20.155590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:45.706 [2024-07-15 20:52:20.155768] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.706 [2024-07-15 20:52:20.155777] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.706 [2024-07-15 20:52:20.155783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.706 [2024-07-15 20:52:20.158610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.706 [2024-07-15 20:52:20.160261] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:45.706 20:52:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:45.706 20:52:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:26:45.706 20:52:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:45.706 20:52:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:45.706 [2024-07-15 20:52:20.168124] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.706 [2024-07-15 20:52:20.168583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.706 [2024-07-15 20:52:20.168602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:45.706 [2024-07-15 20:52:20.168609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:45.706 [2024-07-15 20:52:20.168786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:45.706 [2024-07-15 20:52:20.168963] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.706 [2024-07-15 20:52:20.168971] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.706 [2024-07-15 20:52:20.168977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.706 [2024-07-15 20:52:20.171805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.706 [2024-07-15 20:52:20.181312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.706 [2024-07-15 20:52:20.181755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.707 [2024-07-15 20:52:20.181771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:45.707 [2024-07-15 20:52:20.181778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:45.707 [2024-07-15 20:52:20.181956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:45.707 [2024-07-15 20:52:20.182132] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.707 [2024-07-15 20:52:20.182141] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.707 [2024-07-15 20:52:20.182148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.707 [2024-07-15 20:52:20.184974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.966 [2024-07-15 20:52:20.194506] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.966 Malloc0
00:26:45.966 [2024-07-15 20:52:20.194994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.966 [2024-07-15 20:52:20.195011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:45.966 [2024-07-15 20:52:20.195019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:45.966 [2024-07-15 20:52:20.195196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:45.966 20:52:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:45.966 [2024-07-15 20:52:20.195379] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.966 [2024-07-15 20:52:20.195389] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.966 [2024-07-15 20:52:20.195396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.966 20:52:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:26:45.966 20:52:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:45.966 20:52:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:45.966 [2024-07-15 20:52:20.198217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.966 20:52:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:45.966 20:52:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:26:45.966 [2024-07-15 20:52:20.207558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.966 20:52:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:45.966 20:52:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:45.966 [2024-07-15 20:52:20.208039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.966 [2024-07-15 20:52:20.208055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc980 with addr=10.0.0.2, port=4420
00:26:45.966 [2024-07-15 20:52:20.208062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc980 is same with the state(5) to be set
00:26:45.966 [2024-07-15 20:52:20.208243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc980 (9): Bad file descriptor
00:26:45.966 [2024-07-15 20:52:20.208420] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.966 [2024-07-15 20:52:20.208428] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.966 [2024-07-15 20:52:20.208434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.966 [2024-07-15 20:52:20.211261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.966 20:52:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:45.966 20:52:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:45.966 20:52:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:45.966 20:52:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:45.966 [2024-07-15 20:52:20.218503] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:45.966 [2024-07-15 20:52:20.220601] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.966 20:52:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:45.966 20:52:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2840113
00:26:45.966 [2024-07-15 20:52:20.251736] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
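Note: the host/bdevperf.sh@17-21 rpc_cmd calls traced above are the complete target-side bring-up for this test, interleaved with the host's reconnect storm. A minimal standalone sketch of the same five RPCs, assuming a running nvmf_tgt and the in-tree scripts/rpc.py client (the harness's rpc_cmd wrapper drives the same RPC interface):

  # Sketch only: replays the target setup traced above; all names and values are copied from the trace.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                    # transport flags exactly as traced
  $rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB RAM-backed bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose Malloc0 as a namespace
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Until the last RPC installs the listener, every host-side reconnect attempt fails with errno = 111 (ECONNREFUSED), which is the posix_sock_create storm above; the first reset attempted after nvmf_tcp_listen fires is the one that comes back "Resetting controller successful."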
00:26:56.002 
00:26:56.002 Latency(us)
00:26:56.002 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:56.002 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:56.002 Verification LBA range: start 0x0 length 0x4000
00:26:56.002 Nvme1n1                     :      15.01    8121.87      31.73   12473.85       0.00    6194.89     651.80   20743.57
00:26:56.002 ===================================================================================================================
00:26:56.002 Total                       :               8121.87      31.73   12473.85       0.00    6194.89     651.80   20743.57
00:26:56.002 20:52:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:26:56.002 20:52:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:56.002 20:52:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:56.002 20:52:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:56.002 20:52:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:56.002 20:52:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:26:56.002 20:52:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:26:56.002 20:52:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:56.002 20:52:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:26:56.002 20:52:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:56.002 20:52:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:26:56.002 20:52:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:56.002 20:52:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:26:56.002 20:52:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:56.002 20:52:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:26:56.002 20:52:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:26:56.002 20:52:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2841145 ']'
00:26:56.002 20:52:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2841145
00:26:56.002 20:52:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 2841145 ']'
00:26:56.002 20:52:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 2841145
00:26:56.002 20:52:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname
00:26:56.002 20:52:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:56.002 20:52:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2841145
00:26:56.002 20:52:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:26:56.002 20:52:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:26:56.002 20:52:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2841145'
killing process with pid 2841145
00:26:56.002 20:52:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 2841145
00:26:56.002 20:52:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 2841145
00:26:56.002 20:52:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:26:56.002 20:52:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:26:56.002 20:52:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:26:56.002 20:52:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:26:56.002 20:52:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns
00:26:56.002 20:52:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:56.002 20:52:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:26:56.002 20:52:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:56.939 20:52:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:26:56.939 
00:26:56.939 real 0m26.071s
00:26:56.939 user 1m2.611s
00:26:56.939 sys 0m6.236s
00:26:56.939 20:52:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:26:56.939 20:52:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:56.939 ************************************
00:26:56.939 END TEST nvmf_bdevperf
00:26:56.939 ************************************
00:26:56.939 20:52:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:26:56.939 20:52:31 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:26:56.939 20:52:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:26:56.939 20:52:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:26:56.939 20:52:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:26:56.939 ************************************
00:26:56.939 START TEST nvmf_target_disconnect
00:26:56.939 ************************************
00:26:56.939 20:52:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:26:56.939 * Looking for test storage...
00:26:56.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:26:56.939 20:52:31 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable
00:26:56.940 20:52:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=()
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=()
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=()
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=()
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=()
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=()
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=()
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:27:02.213 Found 0000:86:00.0 (0x8086 - 0x159b)
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:27:02.213 Found 0000:86:00.1 (0x8086 - 0x159b)
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]]
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:27:02.213 Found net devices under 0000:86:00.0: cvl_0_0
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]]
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:27:02.213 Found net devices under 0000:86:00.1: cvl_0_1
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:27:02.213 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:27:02.473 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:27:02.473 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:27:02.473 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:27:02.473 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:27:02.473 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:27:02.473 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:27:02.473 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:27:02.473 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:02.473 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms
00:27:02.473 
00:27:02.473 --- 10.0.0.2 ping statistics ---
00:27:02.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:02.473 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms
00:27:02.473 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:02.473 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:02.473 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms
00:27:02.473 
00:27:02.473 --- 10.0.0.1 ping statistics ---
00:27:02.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:02.473 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms
00:27:02.473 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:02.473 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0
00:27:02.473 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:27:02.473 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:02.473 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:27:02.473 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:27:02.473 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:02.473 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:27:02.473 20:52:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:27:02.473 20:52:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1
00:27:02.473 20:52:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:27:02.473 20:52:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:02.473 20:52:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:27:02.473 ************************************
00:27:02.473 START TEST nvmf_target_disconnect_tc1
00:27:02.473 ************************************
00:27:02.473 20:52:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1
00:27:02.473 20:52:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:27:02.473 20:52:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0
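Note: the nvmf_tcp_init sequence traced above splits one physical machine into an initiator side (cvl_0_1, 10.0.0.1) and a target side (cvl_0_0, 10.0.0.2) by moving the target port into a network namespace, so NVMe/TCP traffic crosses a real link. A condensed sketch of just the topology commands, with every name and address copied from the trace:

  # Sketch: single-host NVMe/TCP test topology as brought up by nvmf/common.sh above.
  ip netns add cvl_0_0_ns_spdk                                        # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                  # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator reachability

The two ping checks above are exactly what produced the ping statistics in the trace; tc1 then deliberately runs the reconnect example with no target listening, so its spdk_nvme_probe() failure is the expected outcome.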
00:27:02.473 20:52:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:27:02.473 20:52:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:27:02.473 20:52:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:27:02.473 20:52:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:27:02.473 20:52:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:27:02.473 20:52:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:27:02.473 20:52:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:27:02.473 20:52:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:27:02.473 20:52:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]]
00:27:02.473 20:52:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:27:02.732 EAL: No free 2048 kB hugepages reported on node 1
00:27:02.732 [2024-07-15 20:52:37.039444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.732 [2024-07-15 20:52:37.039544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f4e60 with addr=10.0.0.2, port=4420
00:27:02.732 [2024-07-15 20:52:37.039590] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:27:02.732 [2024-07-15 20:52:37.039614] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:27:02.732 [2024-07-15 20:52:37.039633] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed
00:27:02.732 spdk_nvme_probe() failed for transport address '10.0.0.2'
00:27:02.732 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred
00:27:02.732 Initializing NVMe Controllers
00:27:02.732 20:52:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1
00:27:02.732 20:52:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:27:02.732 20:52:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:27:02.732 20:52:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:27:02.732 
00:27:02.732 real 0m0.096s
00:27:02.732 user 0m0.043s
00:27:02.732 sys 0m0.053s
00:27:02.732 20:52:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:27:02.732 20:52:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x
00:27:02.732 ************************************
00:27:02.732 END TEST nvmf_target_disconnect_tc1
00:27:02.732 ************************************
00:27:02.732 20:52:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0
00:27:02.732 20:52:37 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2
00:27:02.732 20:52:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:27:02.732 20:52:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:02.732 20:52:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:27:02.732 ************************************
00:27:02.732 START TEST nvmf_target_disconnect_tc2
00:27:02.732 ************************************
00:27:02.732 20:52:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2
00:27:02.732 20:52:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2
00:27:02.732 20:52:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:27:02.732 20:52:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:27:02.732 20:52:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:27:02.732 20:52:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:02.732 20:52:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2846199
00:27:02.732 20:52:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2846199
00:27:02.732 20:52:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:02.732 20:52:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2846199 ']'
00:27:02.732 20:52:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:02.732 20:52:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:27:02.732 20:52:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:02.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
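Note: disconnect_init above launches the target inside the cvl_0_0_ns_spdk namespace and waitforlisten then polls its RPC socket before any rpc_cmd is issued. A hedged sketch of the equivalent shell, with the readiness check reduced to a simple rpc.py probe (the real helper in autotest_common.sh does its own retry loop with max_retries=100):

  # Sketch: start nvmf_tgt in the target namespace, then wait for /var/tmp/spdk.sock to answer.
  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  for ((i = 0; i < 100; i++)); do
      # rpc_get_methods only succeeds once the app is up and listening on the socket
      $spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
      sleep 0.5
  done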
00:27:02.732 20:52:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:27:02.732 20:52:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:02.732 [2024-07-15 20:52:37.168600] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization...
00:27:02.732 [2024-07-15 20:52:37.168638] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:02.990 EAL: No free 2048 kB hugepages reported on node 1
00:27:02.990 [2024-07-15 20:52:37.236103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:27:02.991 [2024-07-15 20:52:37.315244] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:02.991 [2024-07-15 20:52:37.315296] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:02.991 [2024-07-15 20:52:37.315303] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:02.991 [2024-07-15 20:52:37.315309] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:02.991 [2024-07-15 20:52:37.315314] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:02.991 [2024-07-15 20:52:37.315876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:27:02.991 [2024-07-15 20:52:37.315894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:27:02.991 [2024-07-15 20:52:37.315924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:27:02.991 [2024-07-15 20:52:37.315926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:27:03.558 20:52:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:27:03.558 20:52:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:27:03.558 20:52:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:27:03.558 20:52:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:27:03.558 20:52:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:03.558 20:52:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:03.558 20:52:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:03.558 20:52:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:03.817 20:52:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:03.817 Malloc0
00:27:03.817 20:52:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:03.817 20:52:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:03.817 20:52:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:03.817 20:52:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:03.817 [2024-07-15 20:52:38.046441] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:03.817 20:52:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:03.817 20:52:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:03.817 20:52:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:03.817 20:52:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:03.817 20:52:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:03.817 20:52:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:03.817 20:52:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:03.817 20:52:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:03.817 20:52:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:03.817 20:52:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:03.817 20:52:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:03.817 20:52:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:03.817 [2024-07-15 20:52:38.074673] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:03.817 20:52:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:03.817 20:52:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:03.817 20:52:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:03.817 20:52:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:03.817 20:52:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:03.817 20:52:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2846446
00:27:03.817 20:52:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:27:03.817 20:52:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
EAL: No free 2048 kB hugepages reported on node 1
00:27:05.727 20:52:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2846199
00:27:05.727 20:52:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:27:05.727 Read completed with error (sct=0, sc=8)
00:27:05.727 starting I/O failed
00:27:05.727 Read completed with error (sct=0, sc=8)
00:27:05.727 starting I/O failed
00:27:05.727 Read completed with error (sct=0, sc=8)
00:27:05.727 starting I/O failed
00:27:05.727 Read completed with error (sct=0, sc=8)
00:27:05.727 starting I/O failed
00:27:05.727 Read completed with error (sct=0, sc=8)
00:27:05.727 starting I/O failed
00:27:05.727 Read completed with error (sct=0, sc=8)
00:27:05.727 starting I/O failed
00:27:05.727 Read completed with error (sct=0, sc=8)
00:27:05.727 starting I/O failed
00:27:05.727 Read completed with error (sct=0, sc=8)
00:27:05.727 starting I/O failed
00:27:05.727 Read completed with error (sct=0, sc=8)
00:27:05.727 starting I/O failed
00:27:05.727 Write completed with error (sct=0, sc=8)
00:27:05.727 starting I/O failed
00:27:05.727 Read completed with error (sct=0, sc=8)
00:27:05.727 starting I/O failed
00:27:05.727 Read completed with error (sct=0, sc=8)
00:27:05.727 starting I/O failed
00:27:05.727 Write completed with error (sct=0, sc=8)
00:27:05.727 starting I/O failed
00:27:05.727 Write completed with error (sct=0, sc=8)
00:27:05.727 starting I/O failed
00:27:05.727 Read completed with error (sct=0, sc=8)
00:27:05.727 starting I/O failed
00:27:05.727 Read completed with error (sct=0, sc=8)
00:27:05.727 starting I/O failed
00:27:05.727 Write completed with error (sct=0, sc=8)
00:27:05.727 starting I/O failed
00:27:05.727 Read completed with error (sct=0, sc=8)
00:27:05.727 starting I/O failed
00:27:05.727 Read completed with error (sct=0, sc=8)
00:27:05.727 starting I/O failed
00:27:05.727 Read completed with error (sct=0, sc=8)
00:27:05.727 starting I/O failed
00:27:05.727 Read completed with error (sct=0, sc=8)
00:27:05.727 starting I/O failed
00:27:05.727 Read completed with error (sct=0, sc=8)
00:27:05.727 starting I/O failed
00:27:05.727 Read completed with error (sct=0, sc=8)
00:27:05.727 starting I/O failed
00:27:05.727 Write completed with error (sct=0, sc=8)
00:27:05.727 starting I/O failed
00:27:05.727 Write completed with error (sct=0, sc=8)
00:27:05.727 starting I/O failed
00:27:05.727 Read completed with error (sct=0, sc=8)
00:27:05.727 starting I/O failed
00:27:05.727 Write completed with error (sct=0, sc=8)
00:27:05.727 starting I/O failed
00:27:05.727 Write completed with error (sct=0, sc=8)
00:27:05.727 starting I/O failed
00:27:05.727 Read completed with error (sct=0, sc=8)
00:27:05.727 starting I/O failed
00:27:05.727 Write completed with error (sct=0, sc=8)
00:27:05.727 starting I/O failed
00:27:05.727 Write completed with error (sct=0, sc=8)
00:27:05.727 starting I/O failed
00:27:05.727 Write completed with error (sct=0, sc=8)
00:27:05.727 starting I/O failed
00:27:05.727 Read completed with error (sct=0, sc=8)
00:27:05.727 starting I/O failed
00:27:05.727 Read completed with error (sct=0, sc=8)
00:27:05.727 starting I/O failed
00:27:05.728 [2024-07-15 20:52:40.101694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:05.728 Read completed with error (sct=0, sc=8)
00:27:05.728 starting
I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Write completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Write completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Write completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Write completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Write completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Write completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Write completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Write completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Write completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Write completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Write completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Write completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 [2024-07-15 20:52:40.101903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 
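[editor's note] For the record, the rpc_cmd calls threaded through the xtrace output above are the entire target recipe: a Malloc0 bdev (bdev_malloc_create 64 512), a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 attached as a namespace, and data plus discovery listeners on 10.0.0.2:4420. Outside the autotest harness the same target can be assembled by hand with scripts/rpc.py against an already-running nvmf_tgt; a minimal sketch, assuming the default rpc.py socket:

  # By-hand equivalent of the target setup above (sketch; nvmf_tgt is
  # assumed to be running and reachable on the default rpc.py socket).
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420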
00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Write completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Write completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Write completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Write completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 [2024-07-15 20:52:40.102099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Write completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Write completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Write completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Write completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Write completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Write completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.728 starting I/O failed 00:27:05.728 Read completed with error (sct=0, sc=8) 00:27:05.729 starting I/O failed 00:27:05.729 Write completed with error (sct=0, sc=8) 00:27:05.729 starting I/O failed 00:27:05.729 Read completed with error (sct=0, sc=8) 00:27:05.729 starting I/O failed 00:27:05.729 Write 
completed with error (sct=0, sc=8) 00:27:05.729 starting I/O failed 00:27:05.729 Write completed with error (sct=0, sc=8) 00:27:05.729 starting I/O failed 00:27:05.729 Write completed with error (sct=0, sc=8) 00:27:05.729 starting I/O failed 00:27:05.729 Write completed with error (sct=0, sc=8) 00:27:05.729 starting I/O failed 00:27:05.729 Read completed with error (sct=0, sc=8) 00:27:05.729 starting I/O failed 00:27:05.729 Write completed with error (sct=0, sc=8) 00:27:05.729 starting I/O failed 00:27:05.729 Read completed with error (sct=0, sc=8) 00:27:05.729 starting I/O failed 00:27:05.729 Write completed with error (sct=0, sc=8) 00:27:05.729 starting I/O failed 00:27:05.729 Read completed with error (sct=0, sc=8) 00:27:05.729 starting I/O failed 00:27:05.729 Write completed with error (sct=0, sc=8) 00:27:05.729 starting I/O failed 00:27:05.729 Read completed with error (sct=0, sc=8) 00:27:05.729 starting I/O failed 00:27:05.729 Read completed with error (sct=0, sc=8) 00:27:05.729 starting I/O failed 00:27:05.729 Write completed with error (sct=0, sc=8) 00:27:05.729 starting I/O failed 00:27:05.729 Read completed with error (sct=0, sc=8) 00:27:05.729 starting I/O failed 00:27:05.729 [2024-07-15 20:52:40.102297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.729 [2024-07-15 20:52:40.102500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.729 [2024-07-15 20:52:40.102517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.729 qpair failed and we were unable to recover it. 00:27:05.729 [2024-07-15 20:52:40.102715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.729 [2024-07-15 20:52:40.102726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.729 qpair failed and we were unable to recover it. 00:27:05.729 [2024-07-15 20:52:40.102992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.729 [2024-07-15 20:52:40.103002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.729 qpair failed and we were unable to recover it. 00:27:05.729 [2024-07-15 20:52:40.103274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.729 [2024-07-15 20:52:40.103285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.729 qpair failed and we were unable to recover it. 00:27:05.729 [2024-07-15 20:52:40.103409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.729 [2024-07-15 20:52:40.103419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.729 qpair failed and we were unable to recover it. 00:27:05.729 [2024-07-15 20:52:40.103662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.729 [2024-07-15 20:52:40.103672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.729 qpair failed and we were unable to recover it. 
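[editor's note] errno = 111 is ECONNREFUSED: once the target process is SIGKILLed there is no listener left on 10.0.0.2:4420, so every reconnect attempt is rejected at the TCP layer before any NVMe-oF handshake starts, and the -6 in the CQ transport errors above is -ENXIO ("No such device or address", as the message itself spells out). The refusal is easy to reproduce from a shell with bash's built-in /dev/tcp redirection; a small sketch using the address and port from this run:

  # Probe the listener the way the host's connect() does; with the
  # target killed, the open fails immediately with ECONNREFUSED (111).
  if (: </dev/tcp/10.0.0.2/4420) 2>/dev/null; then
      echo "10.0.0.2:4420 is accepting connections"
  else
      echo "connect to 10.0.0.2:4420 refused (errno 111)"
  fi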
00:27:05.729 [2024-07-15 20:52:40.103916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.729 [2024-07-15 20:52:40.103961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.729 qpair failed and we were unable to recover it. 00:27:05.729 [2024-07-15 20:52:40.104282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.729 [2024-07-15 20:52:40.104316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.729 qpair failed and we were unable to recover it. 00:27:05.729 [2024-07-15 20:52:40.104556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.729 [2024-07-15 20:52:40.104586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.729 qpair failed and we were unable to recover it. 00:27:05.729 [2024-07-15 20:52:40.104837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.729 [2024-07-15 20:52:40.104867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.729 qpair failed and we were unable to recover it. 00:27:05.729 [2024-07-15 20:52:40.105094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.729 [2024-07-15 20:52:40.105123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.729 qpair failed and we were unable to recover it. 00:27:05.729 [2024-07-15 20:52:40.105396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.729 [2024-07-15 20:52:40.105406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.729 qpair failed and we were unable to recover it. 00:27:05.729 [2024-07-15 20:52:40.105593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.729 [2024-07-15 20:52:40.105603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.729 qpair failed and we were unable to recover it. 00:27:05.729 [2024-07-15 20:52:40.105781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.729 [2024-07-15 20:52:40.105790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.729 qpair failed and we were unable to recover it. 00:27:05.729 [2024-07-15 20:52:40.106105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.729 [2024-07-15 20:52:40.106135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.729 qpair failed and we were unable to recover it. 00:27:05.729 [2024-07-15 20:52:40.106452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.729 [2024-07-15 20:52:40.106483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.729 qpair failed and we were unable to recover it. 
00:27:05.729 [2024-07-15 20:52:40.106656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.729 [2024-07-15 20:52:40.106685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.729 qpair failed and we were unable to recover it. 00:27:05.729 [2024-07-15 20:52:40.106906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.729 [2024-07-15 20:52:40.106936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.729 qpair failed and we were unable to recover it. 00:27:05.729 [2024-07-15 20:52:40.107215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.729 [2024-07-15 20:52:40.107231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.729 qpair failed and we were unable to recover it. 00:27:05.729 [2024-07-15 20:52:40.107448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.729 [2024-07-15 20:52:40.107457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.729 qpair failed and we were unable to recover it. 00:27:05.729 [2024-07-15 20:52:40.107651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.729 [2024-07-15 20:52:40.107661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.729 qpair failed and we were unable to recover it. 00:27:05.729 [2024-07-15 20:52:40.107929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.729 [2024-07-15 20:52:40.107939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.729 qpair failed and we were unable to recover it. 00:27:05.729 [2024-07-15 20:52:40.108206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.729 [2024-07-15 20:52:40.108216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.729 qpair failed and we were unable to recover it. 00:27:05.729 [2024-07-15 20:52:40.108411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.729 [2024-07-15 20:52:40.108421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.729 qpair failed and we were unable to recover it. 00:27:05.729 [2024-07-15 20:52:40.108707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.729 [2024-07-15 20:52:40.108736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.729 qpair failed and we were unable to recover it. 00:27:05.729 [2024-07-15 20:52:40.109078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.729 [2024-07-15 20:52:40.109107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.729 qpair failed and we were unable to recover it. 
00:27:05.729 [2024-07-15 20:52:40.109392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.729 [2024-07-15 20:52:40.109402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.729 qpair failed and we were unable to recover it. 00:27:05.729 [2024-07-15 20:52:40.109596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.729 [2024-07-15 20:52:40.109609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.729 qpair failed and we were unable to recover it. 00:27:05.729 [2024-07-15 20:52:40.109790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.729 [2024-07-15 20:52:40.109804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.729 qpair failed and we were unable to recover it. 00:27:05.729 [2024-07-15 20:52:40.109995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.729 [2024-07-15 20:52:40.110008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.729 qpair failed and we were unable to recover it. 00:27:05.729 [2024-07-15 20:52:40.110203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.729 [2024-07-15 20:52:40.110217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.729 qpair failed and we were unable to recover it. 00:27:05.729 [2024-07-15 20:52:40.110486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.729 [2024-07-15 20:52:40.110500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.729 qpair failed and we were unable to recover it. 00:27:05.729 [2024-07-15 20:52:40.110658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.729 [2024-07-15 20:52:40.110671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.729 qpair failed and we were unable to recover it. 00:27:05.729 [2024-07-15 20:52:40.110978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.729 [2024-07-15 20:52:40.110993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.729 qpair failed and we were unable to recover it. 00:27:05.729 [2024-07-15 20:52:40.111246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.729 [2024-07-15 20:52:40.111261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 00:27:05.730 [2024-07-15 20:52:40.111403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.730 [2024-07-15 20:52:40.111416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 
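[editor's note] Stripped of harness plumbing, the sequence above is the classic disconnect drill: start the reconnect example against the live target, let it ramp up queued I/O, then SIGKILL the target out from under it. A condensed sketch using the exact flags from this run, with $tgt_pid standing in for the target PID the harness tracked (2846199 here):

  # Disconnect drill (sketch): run I/O, then yank the target away.
  ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  reconnectpid=$!
  sleep 2               # let I/O get in flight
  kill -9 "$tgt_pid"    # $tgt_pid: assumed variable holding the nvmf_tgt PID
  sleep 2               # the failure storm above is what follows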
00:27:05.730 [2024-07-15 20:52:40.111696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.730 [2024-07-15 20:52:40.111710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 00:27:05.730 [2024-07-15 20:52:40.111986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.730 [2024-07-15 20:52:40.112000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 00:27:05.730 [2024-07-15 20:52:40.112289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.730 [2024-07-15 20:52:40.112303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 00:27:05.730 [2024-07-15 20:52:40.112554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.730 [2024-07-15 20:52:40.112567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 00:27:05.730 [2024-07-15 20:52:40.112765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.730 [2024-07-15 20:52:40.112779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 00:27:05.730 [2024-07-15 20:52:40.113114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.730 [2024-07-15 20:52:40.113128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 00:27:05.730 [2024-07-15 20:52:40.113353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.730 [2024-07-15 20:52:40.113366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 00:27:05.730 [2024-07-15 20:52:40.113583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.730 [2024-07-15 20:52:40.113596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 00:27:05.730 [2024-07-15 20:52:40.113868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.730 [2024-07-15 20:52:40.113881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 00:27:05.730 [2024-07-15 20:52:40.114057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.730 [2024-07-15 20:52:40.114071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 
00:27:05.730 [2024-07-15 20:52:40.114378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.730 [2024-07-15 20:52:40.114392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 00:27:05.730 [2024-07-15 20:52:40.114596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.730 [2024-07-15 20:52:40.114610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 00:27:05.730 [2024-07-15 20:52:40.114897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.730 [2024-07-15 20:52:40.114911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 00:27:05.730 [2024-07-15 20:52:40.115153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.730 [2024-07-15 20:52:40.115166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 00:27:05.730 [2024-07-15 20:52:40.115443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.730 [2024-07-15 20:52:40.115457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 00:27:05.730 [2024-07-15 20:52:40.115729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.730 [2024-07-15 20:52:40.115742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 00:27:05.730 [2024-07-15 20:52:40.116015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.730 [2024-07-15 20:52:40.116028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 00:27:05.730 [2024-07-15 20:52:40.116223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.730 [2024-07-15 20:52:40.116243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 00:27:05.730 [2024-07-15 20:52:40.116488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.730 [2024-07-15 20:52:40.116502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 00:27:05.730 [2024-07-15 20:52:40.116794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.730 [2024-07-15 20:52:40.116807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 
00:27:05.730 [2024-07-15 20:52:40.117063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.730 [2024-07-15 20:52:40.117077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 00:27:05.730 [2024-07-15 20:52:40.117268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.730 [2024-07-15 20:52:40.117282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 00:27:05.730 [2024-07-15 20:52:40.117578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.730 [2024-07-15 20:52:40.117591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 00:27:05.730 [2024-07-15 20:52:40.117781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.730 [2024-07-15 20:52:40.117794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 00:27:05.730 [2024-07-15 20:52:40.118082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.730 [2024-07-15 20:52:40.118096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 00:27:05.730 [2024-07-15 20:52:40.118314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.730 [2024-07-15 20:52:40.118328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 00:27:05.730 [2024-07-15 20:52:40.118457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.730 [2024-07-15 20:52:40.118471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 00:27:05.730 [2024-07-15 20:52:40.118716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.730 [2024-07-15 20:52:40.118730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 00:27:05.730 [2024-07-15 20:52:40.118910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.730 [2024-07-15 20:52:40.118924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 00:27:05.730 [2024-07-15 20:52:40.119120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.730 [2024-07-15 20:52:40.119149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 
00:27:05.730 [2024-07-15 20:52:40.119323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.730 [2024-07-15 20:52:40.119354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 00:27:05.730 [2024-07-15 20:52:40.119634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.730 [2024-07-15 20:52:40.119664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 00:27:05.730 [2024-07-15 20:52:40.119940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.730 [2024-07-15 20:52:40.119969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 00:27:05.730 [2024-07-15 20:52:40.120308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.730 [2024-07-15 20:52:40.120338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 00:27:05.730 [2024-07-15 20:52:40.120583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.730 [2024-07-15 20:52:40.120613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 00:27:05.730 [2024-07-15 20:52:40.120881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.730 [2024-07-15 20:52:40.120910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 00:27:05.730 [2024-07-15 20:52:40.121160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.730 [2024-07-15 20:52:40.121173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 00:27:05.730 [2024-07-15 20:52:40.121442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.730 [2024-07-15 20:52:40.121459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.730 qpair failed and we were unable to recover it. 00:27:05.730 [2024-07-15 20:52:40.121679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.121693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 00:27:05.731 [2024-07-15 20:52:40.121959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.121973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 
00:27:05.731 [2024-07-15 20:52:40.122266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.122280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 00:27:05.731 [2024-07-15 20:52:40.122524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.122538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 00:27:05.731 [2024-07-15 20:52:40.122716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.122729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 00:27:05.731 [2024-07-15 20:52:40.123012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.123026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 00:27:05.731 [2024-07-15 20:52:40.123241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.123255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 00:27:05.731 [2024-07-15 20:52:40.123527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.123541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 00:27:05.731 [2024-07-15 20:52:40.123680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.123694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 00:27:05.731 [2024-07-15 20:52:40.123831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.123844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 00:27:05.731 [2024-07-15 20:52:40.124094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.124107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 00:27:05.731 [2024-07-15 20:52:40.124352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.124366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 
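[editor's note] Two signatures dominate from here on: the per-qpair flush of outstanding commands (sct=0, sc=8, which in the NVMe generic status set is the "command aborted due to SQ deletion" code) and the connect() retry loop, which alternates between two qpair objects in this run (tqpair=0x7f4838000b90 and tqpair=0x18d9ed0). A couple of greps summarize a capture like this faster than scrolling; a sketch, with build.log as a placeholder for wherever the console output was saved:

  # Quick triage over a saved copy of this log (build.log is a
  # placeholder file name).
  grep -c 'completed with error (sct=0, sc=8)' build.log     # flushed I/Os
  grep -c 'unable to recover it' build.log                   # failed reconnects
  grep -o 'tqpair=0x[0-9a-f]*' build.log | sort | uniq -c    # per-qpair split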
00:27:05.731 [2024-07-15 20:52:40.124631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.124644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 00:27:05.731 [2024-07-15 20:52:40.124914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.124928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 00:27:05.731 [2024-07-15 20:52:40.125106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.125119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 00:27:05.731 [2024-07-15 20:52:40.125324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.125338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 00:27:05.731 [2024-07-15 20:52:40.125584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.125597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 00:27:05.731 [2024-07-15 20:52:40.125882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.125896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 00:27:05.731 [2024-07-15 20:52:40.126179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.126192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 00:27:05.731 [2024-07-15 20:52:40.126447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.126461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 00:27:05.731 [2024-07-15 20:52:40.126638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.126652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 00:27:05.731 [2024-07-15 20:52:40.126842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.126855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 
00:27:05.731 [2024-07-15 20:52:40.126994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.127007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 00:27:05.731 [2024-07-15 20:52:40.127212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.127253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 00:27:05.731 [2024-07-15 20:52:40.127462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.127491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 00:27:05.731 [2024-07-15 20:52:40.127794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.127824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 00:27:05.731 [2024-07-15 20:52:40.128074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.128104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 00:27:05.731 [2024-07-15 20:52:40.128404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.128435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 00:27:05.731 [2024-07-15 20:52:40.128628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.128657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 00:27:05.731 [2024-07-15 20:52:40.128908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.128937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 00:27:05.731 [2024-07-15 20:52:40.129212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.129249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 00:27:05.731 [2024-07-15 20:52:40.129569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.129582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 
00:27:05.731 [2024-07-15 20:52:40.129729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.129742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 00:27:05.731 [2024-07-15 20:52:40.129988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.130002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 00:27:05.731 [2024-07-15 20:52:40.130316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.130348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 00:27:05.731 [2024-07-15 20:52:40.130655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.130685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 00:27:05.731 [2024-07-15 20:52:40.130987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.131016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 00:27:05.731 [2024-07-15 20:52:40.131256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.131293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 00:27:05.731 [2024-07-15 20:52:40.131554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.131567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 00:27:05.731 [2024-07-15 20:52:40.131785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.131801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 00:27:05.731 [2024-07-15 20:52:40.132067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.731 [2024-07-15 20:52:40.132081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.731 qpair failed and we were unable to recover it. 00:27:05.732 [2024-07-15 20:52:40.132325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.732 [2024-07-15 20:52:40.132339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.732 qpair failed and we were unable to recover it. 
00:27:05.732 [2024-07-15 20:52:40.132516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.732 [2024-07-15 20:52:40.132530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.732 qpair failed and we were unable to recover it. 00:27:05.732 [2024-07-15 20:52:40.132784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.732 [2024-07-15 20:52:40.132813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.732 qpair failed and we were unable to recover it. 00:27:05.732 [2024-07-15 20:52:40.133113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.732 [2024-07-15 20:52:40.133126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.732 qpair failed and we were unable to recover it. 00:27:05.732 [2024-07-15 20:52:40.133320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.732 [2024-07-15 20:52:40.133334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.732 qpair failed and we were unable to recover it. 00:27:05.732 [2024-07-15 20:52:40.133525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.732 [2024-07-15 20:52:40.133553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.732 qpair failed and we were unable to recover it. 00:27:05.732 [2024-07-15 20:52:40.133775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.732 [2024-07-15 20:52:40.133804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.732 qpair failed and we were unable to recover it. 00:27:05.732 [2024-07-15 20:52:40.134115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.732 [2024-07-15 20:52:40.134144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.732 qpair failed and we were unable to recover it. 00:27:05.732 [2024-07-15 20:52:40.134433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.732 [2024-07-15 20:52:40.134464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.732 qpair failed and we were unable to recover it. 00:27:05.732 [2024-07-15 20:52:40.134622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.732 [2024-07-15 20:52:40.134650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.732 qpair failed and we were unable to recover it. 00:27:05.732 [2024-07-15 20:52:40.134949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.732 [2024-07-15 20:52:40.134978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.732 qpair failed and we were unable to recover it. 
00:27:05.732 [2024-07-15 20:52:40.135279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.732 [2024-07-15 20:52:40.135293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.732 qpair failed and we were unable to recover it. 00:27:05.732 [2024-07-15 20:52:40.135543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.732 [2024-07-15 20:52:40.135557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.732 qpair failed and we were unable to recover it. 00:27:05.732 [2024-07-15 20:52:40.135824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.732 [2024-07-15 20:52:40.135838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.732 qpair failed and we were unable to recover it. 00:27:05.732 [2024-07-15 20:52:40.136081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.732 [2024-07-15 20:52:40.136094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.732 qpair failed and we were unable to recover it. 00:27:05.732 [2024-07-15 20:52:40.136359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.732 [2024-07-15 20:52:40.136373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.732 qpair failed and we were unable to recover it. 00:27:05.732 [2024-07-15 20:52:40.136529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.732 [2024-07-15 20:52:40.136543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.732 qpair failed and we were unable to recover it. 00:27:05.732 [2024-07-15 20:52:40.136731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.732 [2024-07-15 20:52:40.136761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.732 qpair failed and we were unable to recover it. 00:27:05.732 [2024-07-15 20:52:40.137042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.732 [2024-07-15 20:52:40.137072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.732 qpair failed and we were unable to recover it. 00:27:05.732 [2024-07-15 20:52:40.137366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.732 [2024-07-15 20:52:40.137380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.732 qpair failed and we were unable to recover it. 00:27:05.732 [2024-07-15 20:52:40.137575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.732 [2024-07-15 20:52:40.137588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.732 qpair failed and we were unable to recover it. 
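[editor's note] Nothing in this stretch of the log suggests the target has come back yet, so the retries keep ending the same way. When reproducing this interactively, a one-liner confirms the injected fault is still in force before blaming anything else; a sketch (ss flags: -l listening sockets, -t TCP, -n numeric):

  # Empty grep output means nothing is listening on 4420 any more, so
  # further reconnect attempts can only end in ECONNREFUSED.
  ss -ltn | grep ':4420' || echo 'no listener on port 4420 (fault still active)'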
00:27:05.732 [2024-07-15 20:52:40.137764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.732 [2024-07-15 20:52:40.137777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.732 qpair failed and we were unable to recover it. 00:27:05.732 [2024-07-15 20:52:40.138042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.732 [2024-07-15 20:52:40.138071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.732 qpair failed and we were unable to recover it. 00:27:05.732 [2024-07-15 20:52:40.138345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.732 [2024-07-15 20:52:40.138359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.732 qpair failed and we were unable to recover it. 00:27:05.732 [2024-07-15 20:52:40.138561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.732 [2024-07-15 20:52:40.138575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.732 qpair failed and we were unable to recover it. 00:27:05.732 [2024-07-15 20:52:40.138785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.732 [2024-07-15 20:52:40.138799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.732 qpair failed and we were unable to recover it. 00:27:05.732 [2024-07-15 20:52:40.139042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.732 [2024-07-15 20:52:40.139057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.732 qpair failed and we were unable to recover it. 00:27:05.732 [2024-07-15 20:52:40.139363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.732 [2024-07-15 20:52:40.139393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.732 qpair failed and we were unable to recover it. 00:27:05.732 [2024-07-15 20:52:40.139607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.732 [2024-07-15 20:52:40.139637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.732 qpair failed and we were unable to recover it. 00:27:05.732 [2024-07-15 20:52:40.139961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.732 [2024-07-15 20:52:40.139997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.732 qpair failed and we were unable to recover it. 00:27:05.732 [2024-07-15 20:52:40.140239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.732 [2024-07-15 20:52:40.140253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.732 qpair failed and we were unable to recover it. 
00:27:05.732 [2024-07-15 20:52:40.140433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.732 [2024-07-15 20:52:40.140447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.732 qpair failed and we were unable to recover it. 00:27:05.732 [2024-07-15 20:52:40.140731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.732 [2024-07-15 20:52:40.140745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.732 qpair failed and we were unable to recover it. 00:27:05.732 [2024-07-15 20:52:40.140935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.732 [2024-07-15 20:52:40.140963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.732 qpair failed and we were unable to recover it. 00:27:05.733 [2024-07-15 20:52:40.141198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.733 [2024-07-15 20:52:40.141236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.733 qpair failed and we were unable to recover it. 00:27:05.733 [2024-07-15 20:52:40.141545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.733 [2024-07-15 20:52:40.141575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.733 qpair failed and we were unable to recover it. 00:27:05.733 [2024-07-15 20:52:40.141865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.733 [2024-07-15 20:52:40.141894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.733 qpair failed and we were unable to recover it. 00:27:05.733 [2024-07-15 20:52:40.142134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.733 [2024-07-15 20:52:40.142162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.733 qpair failed and we were unable to recover it. 00:27:05.733 [2024-07-15 20:52:40.142392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.733 [2024-07-15 20:52:40.142427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.733 qpair failed and we were unable to recover it. 00:27:05.733 [2024-07-15 20:52:40.142728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.733 [2024-07-15 20:52:40.142758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.733 qpair failed and we were unable to recover it. 00:27:05.733 [2024-07-15 20:52:40.143059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.733 [2024-07-15 20:52:40.143088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.733 qpair failed and we were unable to recover it. 
00:27:05.733 [2024-07-15 20:52:40.143396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.733 [2024-07-15 20:52:40.143426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.733 qpair failed and we were unable to recover it. 00:27:05.733 [2024-07-15 20:52:40.143655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.733 [2024-07-15 20:52:40.143684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.733 qpair failed and we were unable to recover it. 00:27:05.733 [2024-07-15 20:52:40.143906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.733 [2024-07-15 20:52:40.143935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.733 qpair failed and we were unable to recover it. 00:27:05.733 [2024-07-15 20:52:40.144245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.733 [2024-07-15 20:52:40.144298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.733 qpair failed and we were unable to recover it. 00:27:05.733 [2024-07-15 20:52:40.144544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.733 [2024-07-15 20:52:40.144558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.733 qpair failed and we were unable to recover it. 00:27:05.733 [2024-07-15 20:52:40.144738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.733 [2024-07-15 20:52:40.144751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.733 qpair failed and we were unable to recover it. 00:27:05.733 [2024-07-15 20:52:40.145044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.733 [2024-07-15 20:52:40.145074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.733 qpair failed and we were unable to recover it. 00:27:05.733 [2024-07-15 20:52:40.145307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.733 [2024-07-15 20:52:40.145338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.733 qpair failed and we were unable to recover it. 00:27:05.733 [2024-07-15 20:52:40.145623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.733 [2024-07-15 20:52:40.145653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.733 qpair failed and we were unable to recover it. 00:27:05.733 [2024-07-15 20:52:40.145902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.733 [2024-07-15 20:52:40.145932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.733 qpair failed and we were unable to recover it. 
00:27:05.733 [2024-07-15 20:52:40.146165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.733 [2024-07-15 20:52:40.146194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:05.733 qpair failed and we were unable to recover it. 00:27:05.733 [2024-07-15 20:52:40.146516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.733 [2024-07-15 20:52:40.146542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.733 qpair failed and we were unable to recover it. 00:27:05.733 [2024-07-15 20:52:40.146790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.733 [2024-07-15 20:52:40.146801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.733 qpair failed and we were unable to recover it. 00:27:05.733 [2024-07-15 20:52:40.147059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.733 [2024-07-15 20:52:40.147069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.733 qpair failed and we were unable to recover it. 00:27:05.733 [2024-07-15 20:52:40.147328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.733 [2024-07-15 20:52:40.147338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.733 qpair failed and we were unable to recover it. 00:27:05.733 [2024-07-15 20:52:40.147565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.733 [2024-07-15 20:52:40.147575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.733 qpair failed and we were unable to recover it. 00:27:05.733 [2024-07-15 20:52:40.147840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.733 [2024-07-15 20:52:40.147850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.733 qpair failed and we were unable to recover it. 00:27:05.733 [2024-07-15 20:52:40.148053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.733 [2024-07-15 20:52:40.148063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.733 qpair failed and we were unable to recover it. 00:27:05.733 [2024-07-15 20:52:40.148271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.733 [2024-07-15 20:52:40.148300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.733 qpair failed and we were unable to recover it. 00:27:05.733 [2024-07-15 20:52:40.148591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.733 [2024-07-15 20:52:40.148621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.733 qpair failed and we were unable to recover it. 
00:27:05.733 [2024-07-15 20:52:40.148845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.733 [2024-07-15 20:52:40.148874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.733 qpair failed and we were unable to recover it. 00:27:05.733 [2024-07-15 20:52:40.149122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.733 [2024-07-15 20:52:40.149132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.733 qpair failed and we were unable to recover it. 00:27:05.733 [2024-07-15 20:52:40.149332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.733 [2024-07-15 20:52:40.149342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.733 qpair failed and we were unable to recover it. 00:27:05.733 [2024-07-15 20:52:40.149598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.733 [2024-07-15 20:52:40.149608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.733 qpair failed and we were unable to recover it. 00:27:05.733 [2024-07-15 20:52:40.149873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.733 [2024-07-15 20:52:40.149883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.733 qpair failed and we were unable to recover it. 00:27:05.733 [2024-07-15 20:52:40.150067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.733 [2024-07-15 20:52:40.150077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.733 qpair failed and we were unable to recover it. 00:27:05.733 [2024-07-15 20:52:40.150338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.733 [2024-07-15 20:52:40.150368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.733 qpair failed and we were unable to recover it. 00:27:05.733 [2024-07-15 20:52:40.150593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.733 [2024-07-15 20:52:40.150622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.733 qpair failed and we were unable to recover it. 00:27:05.733 [2024-07-15 20:52:40.150874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.733 [2024-07-15 20:52:40.150903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.733 qpair failed and we were unable to recover it. 00:27:05.733 [2024-07-15 20:52:40.151133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.733 [2024-07-15 20:52:40.151162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.733 qpair failed and we were unable to recover it. 
00:27:05.733 [2024-07-15 20:52:40.151444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.733 [2024-07-15 20:52:40.151454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.733 qpair failed and we were unable to recover it. 00:27:05.733 [2024-07-15 20:52:40.151691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.733 [2024-07-15 20:52:40.151701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.733 qpair failed and we were unable to recover it. 00:27:05.733 [2024-07-15 20:52:40.151936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.734 [2024-07-15 20:52:40.151946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.734 qpair failed and we were unable to recover it. 00:27:05.734 [2024-07-15 20:52:40.152230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.734 [2024-07-15 20:52:40.152240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.734 qpair failed and we were unable to recover it. 00:27:05.734 [2024-07-15 20:52:40.152421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.734 [2024-07-15 20:52:40.152431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.734 qpair failed and we were unable to recover it. 00:27:05.734 [2024-07-15 20:52:40.152691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.734 [2024-07-15 20:52:40.152700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.734 qpair failed and we were unable to recover it. 00:27:05.734 [2024-07-15 20:52:40.152965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.734 [2024-07-15 20:52:40.152994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.734 qpair failed and we were unable to recover it. 00:27:05.734 [2024-07-15 20:52:40.153295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.734 [2024-07-15 20:52:40.153331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.734 qpair failed and we were unable to recover it. 00:27:05.734 [2024-07-15 20:52:40.153558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.734 [2024-07-15 20:52:40.153569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.734 qpair failed and we were unable to recover it. 00:27:05.734 [2024-07-15 20:52:40.153860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.734 [2024-07-15 20:52:40.153870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.734 qpair failed and we were unable to recover it. 
00:27:05.734 [2024-07-15 20:52:40.154107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.734 [2024-07-15 20:52:40.154117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.734 qpair failed and we were unable to recover it. 00:27:05.734 [2024-07-15 20:52:40.154353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.734 [2024-07-15 20:52:40.154364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.734 qpair failed and we were unable to recover it. 00:27:05.734 [2024-07-15 20:52:40.154634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.734 [2024-07-15 20:52:40.154644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.734 qpair failed and we were unable to recover it. 00:27:05.734 [2024-07-15 20:52:40.154815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.734 [2024-07-15 20:52:40.154825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.734 qpair failed and we were unable to recover it. 00:27:05.734 [2024-07-15 20:52:40.155014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.734 [2024-07-15 20:52:40.155024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.734 qpair failed and we were unable to recover it. 00:27:05.734 [2024-07-15 20:52:40.155200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.734 [2024-07-15 20:52:40.155210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.734 qpair failed and we were unable to recover it. 00:27:05.734 [2024-07-15 20:52:40.155454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.734 [2024-07-15 20:52:40.155484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.734 qpair failed and we were unable to recover it. 00:27:05.734 [2024-07-15 20:52:40.155702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.734 [2024-07-15 20:52:40.155732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.734 qpair failed and we were unable to recover it. 00:27:05.734 [2024-07-15 20:52:40.156016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.734 [2024-07-15 20:52:40.156045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.734 qpair failed and we were unable to recover it. 00:27:05.734 [2024-07-15 20:52:40.156257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.734 [2024-07-15 20:52:40.156288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.734 qpair failed and we were unable to recover it. 
00:27:05.734 [2024-07-15 20:52:40.156491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.734 [2024-07-15 20:52:40.156501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.734 qpair failed and we were unable to recover it. 00:27:05.734 [2024-07-15 20:52:40.156784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.734 [2024-07-15 20:52:40.156794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.734 qpair failed and we were unable to recover it. 00:27:05.734 [2024-07-15 20:52:40.157032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.734 [2024-07-15 20:52:40.157042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.734 qpair failed and we were unable to recover it. 00:27:05.734 [2024-07-15 20:52:40.157215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.734 [2024-07-15 20:52:40.157228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.734 qpair failed and we were unable to recover it. 00:27:05.734 [2024-07-15 20:52:40.157463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.734 [2024-07-15 20:52:40.157473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.734 qpair failed and we were unable to recover it. 00:27:05.734 [2024-07-15 20:52:40.157605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.734 [2024-07-15 20:52:40.157615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.734 qpair failed and we were unable to recover it. 00:27:05.734 [2024-07-15 20:52:40.157785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.734 [2024-07-15 20:52:40.157795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.734 qpair failed and we were unable to recover it. 00:27:05.734 [2024-07-15 20:52:40.157988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.734 [2024-07-15 20:52:40.157998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.734 qpair failed and we were unable to recover it. 00:27:05.734 [2024-07-15 20:52:40.158242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.734 [2024-07-15 20:52:40.158272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.734 qpair failed and we were unable to recover it. 00:27:05.734 [2024-07-15 20:52:40.158561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.734 [2024-07-15 20:52:40.158591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.734 qpair failed and we were unable to recover it. 
00:27:05.734 [2024-07-15 20:52:40.158752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.734 [2024-07-15 20:52:40.158782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.734 qpair failed and we were unable to recover it. 00:27:05.734 [2024-07-15 20:52:40.159001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.734 [2024-07-15 20:52:40.159031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.734 qpair failed and we were unable to recover it. 00:27:05.734 [2024-07-15 20:52:40.159308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.734 [2024-07-15 20:52:40.159338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.734 qpair failed and we were unable to recover it. 00:27:05.734 [2024-07-15 20:52:40.159498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.734 [2024-07-15 20:52:40.159508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.734 qpair failed and we were unable to recover it. 00:27:05.734 [2024-07-15 20:52:40.159756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.734 [2024-07-15 20:52:40.159786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.734 qpair failed and we were unable to recover it. 00:27:05.734 [2024-07-15 20:52:40.160100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.734 [2024-07-15 20:52:40.160130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.734 qpair failed and we were unable to recover it. 00:27:05.734 [2024-07-15 20:52:40.160359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.734 [2024-07-15 20:52:40.160393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.734 qpair failed and we were unable to recover it. 00:27:05.734 [2024-07-15 20:52:40.160650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.734 [2024-07-15 20:52:40.160660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.734 qpair failed and we were unable to recover it. 00:27:05.734 [2024-07-15 20:52:40.160944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.734 [2024-07-15 20:52:40.160954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.734 qpair failed and we were unable to recover it. 00:27:05.734 [2024-07-15 20:52:40.161190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.734 [2024-07-15 20:52:40.161199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.734 qpair failed and we were unable to recover it. 
00:27:05.734 [2024-07-15 20:52:40.161436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.734 [2024-07-15 20:52:40.161446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.734 qpair failed and we were unable to recover it. 00:27:05.734 [2024-07-15 20:52:40.161677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.735 [2024-07-15 20:52:40.161686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.735 qpair failed and we were unable to recover it. 00:27:05.735 [2024-07-15 20:52:40.161936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.735 [2024-07-15 20:52:40.161945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.735 qpair failed and we were unable to recover it. 00:27:05.735 [2024-07-15 20:52:40.162181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.735 [2024-07-15 20:52:40.162190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.735 qpair failed and we were unable to recover it. 00:27:05.735 [2024-07-15 20:52:40.162435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.735 [2024-07-15 20:52:40.162446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.735 qpair failed and we were unable to recover it. 00:27:05.735 [2024-07-15 20:52:40.162623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.735 [2024-07-15 20:52:40.162633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.735 qpair failed and we were unable to recover it. 00:27:05.735 [2024-07-15 20:52:40.162868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.735 [2024-07-15 20:52:40.162897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.735 qpair failed and we were unable to recover it. 00:27:05.735 [2024-07-15 20:52:40.163216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.735 [2024-07-15 20:52:40.163262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.735 qpair failed and we were unable to recover it. 00:27:05.735 [2024-07-15 20:52:40.163529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.735 [2024-07-15 20:52:40.163538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.735 qpair failed and we were unable to recover it. 00:27:05.735 [2024-07-15 20:52:40.163783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.735 [2024-07-15 20:52:40.163792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.735 qpair failed and we were unable to recover it. 
00:27:05.735 [2024-07-15 20:52:40.164072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.735 [2024-07-15 20:52:40.164082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.735 qpair failed and we were unable to recover it. 00:27:05.735 [2024-07-15 20:52:40.164320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.735 [2024-07-15 20:52:40.164330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.735 qpair failed and we were unable to recover it. 00:27:05.735 [2024-07-15 20:52:40.164567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.735 [2024-07-15 20:52:40.164577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.735 qpair failed and we were unable to recover it. 00:27:05.735 [2024-07-15 20:52:40.164824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.735 [2024-07-15 20:52:40.164834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.735 qpair failed and we were unable to recover it. 00:27:05.735 [2024-07-15 20:52:40.165095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.735 [2024-07-15 20:52:40.165105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.735 qpair failed and we were unable to recover it. 00:27:05.735 [2024-07-15 20:52:40.165365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.735 [2024-07-15 20:52:40.165375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.735 qpair failed and we were unable to recover it. 00:27:05.735 [2024-07-15 20:52:40.165613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.735 [2024-07-15 20:52:40.165623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.735 qpair failed and we were unable to recover it. 00:27:05.735 [2024-07-15 20:52:40.165880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.735 [2024-07-15 20:52:40.165889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.735 qpair failed and we were unable to recover it. 00:27:05.735 [2024-07-15 20:52:40.166018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.735 [2024-07-15 20:52:40.166027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.735 qpair failed and we were unable to recover it. 00:27:05.735 [2024-07-15 20:52:40.166268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.735 [2024-07-15 20:52:40.166299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.735 qpair failed and we were unable to recover it. 
00:27:05.735 [2024-07-15 20:52:40.166599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.735 [2024-07-15 20:52:40.166627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.735 qpair failed and we were unable to recover it. 00:27:05.735 [2024-07-15 20:52:40.166942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.735 [2024-07-15 20:52:40.166971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.735 qpair failed and we were unable to recover it. 00:27:05.735 [2024-07-15 20:52:40.167201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.735 [2024-07-15 20:52:40.167211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.735 qpair failed and we were unable to recover it. 00:27:05.735 [2024-07-15 20:52:40.167448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.735 [2024-07-15 20:52:40.167458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.735 qpair failed and we were unable to recover it. 00:27:05.735 [2024-07-15 20:52:40.167717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.735 [2024-07-15 20:52:40.167726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.735 qpair failed and we were unable to recover it. 00:27:05.735 [2024-07-15 20:52:40.167988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.735 [2024-07-15 20:52:40.167997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.735 qpair failed and we were unable to recover it. 00:27:05.735 [2024-07-15 20:52:40.168176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.735 [2024-07-15 20:52:40.168186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.735 qpair failed and we were unable to recover it. 00:27:05.735 [2024-07-15 20:52:40.168434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.735 [2024-07-15 20:52:40.168465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.735 qpair failed and we were unable to recover it. 00:27:05.735 [2024-07-15 20:52:40.168745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.735 [2024-07-15 20:52:40.168774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.735 qpair failed and we were unable to recover it. 00:27:05.735 [2024-07-15 20:52:40.169104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.735 [2024-07-15 20:52:40.169133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.735 qpair failed and we were unable to recover it. 
00:27:05.735 [2024-07-15 20:52:40.169422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.735 [2024-07-15 20:52:40.169452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.735 qpair failed and we were unable to recover it. 00:27:05.735 [2024-07-15 20:52:40.169732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.735 [2024-07-15 20:52:40.169761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.735 qpair failed and we were unable to recover it. 00:27:05.735 [2024-07-15 20:52:40.170085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.735 [2024-07-15 20:52:40.170114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.735 qpair failed and we were unable to recover it. 00:27:05.735 [2024-07-15 20:52:40.170398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.735 [2024-07-15 20:52:40.170408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.735 qpair failed and we were unable to recover it. 00:27:05.735 [2024-07-15 20:52:40.170596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.735 [2024-07-15 20:52:40.170606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.735 qpair failed and we were unable to recover it. 00:27:05.735 [2024-07-15 20:52:40.170775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.735 [2024-07-15 20:52:40.170785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.735 qpair failed and we were unable to recover it. 00:27:05.735 [2024-07-15 20:52:40.171073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.735 [2024-07-15 20:52:40.171102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.735 qpair failed and we were unable to recover it. 00:27:05.735 [2024-07-15 20:52:40.171352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.735 [2024-07-15 20:52:40.171381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.735 qpair failed and we were unable to recover it. 00:27:05.735 [2024-07-15 20:52:40.171624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.735 [2024-07-15 20:52:40.171653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.735 qpair failed and we were unable to recover it. 00:27:05.735 [2024-07-15 20:52:40.171892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.736 [2024-07-15 20:52:40.171921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.736 qpair failed and we were unable to recover it. 
00:27:05.736 [2024-07-15 20:52:40.172214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.736 [2024-07-15 20:52:40.172223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.736 qpair failed and we were unable to recover it. 00:27:05.736 [2024-07-15 20:52:40.172362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.736 [2024-07-15 20:52:40.172371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.736 qpair failed and we were unable to recover it. 00:27:05.736 [2024-07-15 20:52:40.172607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.736 [2024-07-15 20:52:40.172616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.736 qpair failed and we were unable to recover it. 00:27:05.736 [2024-07-15 20:52:40.172832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.736 [2024-07-15 20:52:40.172841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.736 qpair failed and we were unable to recover it. 00:27:05.736 [2024-07-15 20:52:40.173102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.736 [2024-07-15 20:52:40.173111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.736 qpair failed and we were unable to recover it. 00:27:05.736 [2024-07-15 20:52:40.173313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.736 [2024-07-15 20:52:40.173323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.736 qpair failed and we were unable to recover it. 00:27:05.736 [2024-07-15 20:52:40.173582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.736 [2024-07-15 20:52:40.173592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.736 qpair failed and we were unable to recover it. 00:27:05.736 [2024-07-15 20:52:40.173762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.736 [2024-07-15 20:52:40.173774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.736 qpair failed and we were unable to recover it. 00:27:05.736 [2024-07-15 20:52:40.174042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.736 [2024-07-15 20:52:40.174071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.736 qpair failed and we were unable to recover it. 00:27:05.736 [2024-07-15 20:52:40.174395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.736 [2024-07-15 20:52:40.174425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.736 qpair failed and we were unable to recover it. 
00:27:05.736 [2024-07-15 20:52:40.174708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.736 [2024-07-15 20:52:40.174718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.736 qpair failed and we were unable to recover it. 00:27:05.736 [2024-07-15 20:52:40.174980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.736 [2024-07-15 20:52:40.174989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.736 qpair failed and we were unable to recover it. 00:27:05.736 [2024-07-15 20:52:40.175239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.736 [2024-07-15 20:52:40.175249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.736 qpair failed and we were unable to recover it. 00:27:05.736 [2024-07-15 20:52:40.175445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.736 [2024-07-15 20:52:40.175454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.736 qpair failed and we were unable to recover it. 00:27:05.736 [2024-07-15 20:52:40.175624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.736 [2024-07-15 20:52:40.175634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.736 qpair failed and we were unable to recover it. 00:27:05.736 [2024-07-15 20:52:40.175904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.736 [2024-07-15 20:52:40.175933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.736 qpair failed and we were unable to recover it. 00:27:05.736 [2024-07-15 20:52:40.176230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.736 [2024-07-15 20:52:40.176240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.736 qpair failed and we were unable to recover it. 00:27:05.736 [2024-07-15 20:52:40.176420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.736 [2024-07-15 20:52:40.176430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.736 qpair failed and we were unable to recover it. 00:27:05.736 [2024-07-15 20:52:40.176688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.736 [2024-07-15 20:52:40.176698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.736 qpair failed and we were unable to recover it. 00:27:05.736 [2024-07-15 20:52:40.176876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.736 [2024-07-15 20:52:40.176886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.736 qpair failed and we were unable to recover it. 
00:27:05.736 [2024-07-15 20:52:40.177193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.736 [2024-07-15 20:52:40.177223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:05.736 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats for tqpair=0x7f4840000b90 from 20:52:40.177 through 20:52:40.194 ...]
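On Linux, errno = 111 is ECONNREFUSED: the TCP connection attempt to 10.0.0.2 on port 4420 (the IANA-assigned NVMe/TCP port) is being actively refused, which typically means nothing is accepting connections on that address/port at that moment, so every qpair connect attempt fails the same way. Below is a minimal sketch of the failing syscall, assuming a plain BSD TCP socket; it is an illustration of how this errno arises, not the SPDK posix.c implementation:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* Same endpoint the log is dialing: 10.0.0.2, NVMe/TCP port 4420. */
        struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        /* With no listener on the port, connect() fails with errno 111. */
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
    }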
00:27:05.738 [2024-07-15 20:52:40.192850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.738 [2024-07-15 20:52:40.192879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.738 qpair failed and we were unable to recover it. 00:27:05.738 [2024-07-15 20:52:40.193180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.738 [2024-07-15 20:52:40.193209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.738 qpair failed and we were unable to recover it. 00:27:05.738 [2024-07-15 20:52:40.193518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.738 [2024-07-15 20:52:40.193549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.738 qpair failed and we were unable to recover it. 00:27:05.738 [2024-07-15 20:52:40.193805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.738 [2024-07-15 20:52:40.193834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.738 qpair failed and we were unable to recover it. 00:27:05.738 [2024-07-15 20:52:40.194169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.738 [2024-07-15 20:52:40.194198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:05.738 qpair failed and we were unable to recover it. 00:27:05.738 [2024-07-15 20:52:40.194579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.738 [2024-07-15 20:52:40.194647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.738 qpair failed and we were unable to recover it. 00:27:05.738 [2024-07-15 20:52:40.194902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.738 [2024-07-15 20:52:40.194936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.738 qpair failed and we were unable to recover it. 00:27:05.738 [2024-07-15 20:52:40.195164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.738 [2024-07-15 20:52:40.195204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.738 qpair failed and we were unable to recover it. 00:27:05.738 [2024-07-15 20:52:40.195495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.738 [2024-07-15 20:52:40.195509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.738 qpair failed and we were unable to recover it. 00:27:05.738 [2024-07-15 20:52:40.195695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.738 [2024-07-15 20:52:40.195708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.738 qpair failed and we were unable to recover it. 
00:27:05.738 [2024-07-15 20:52:40.195976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.738 [2024-07-15 20:52:40.195990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.738 qpair failed and we were unable to recover it. 00:27:05.738 [2024-07-15 20:52:40.196131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.738 [2024-07-15 20:52:40.196167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.738 qpair failed and we were unable to recover it. 00:27:05.738 [2024-07-15 20:52:40.196481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.738 [2024-07-15 20:52:40.196512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.738 qpair failed and we were unable to recover it. 00:27:05.738 [2024-07-15 20:52:40.196812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.738 [2024-07-15 20:52:40.196843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.738 qpair failed and we were unable to recover it. 00:27:05.738 [2024-07-15 20:52:40.197152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.738 [2024-07-15 20:52:40.197181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.738 qpair failed and we were unable to recover it. 00:27:05.738 [2024-07-15 20:52:40.197479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.738 [2024-07-15 20:52:40.197493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.738 qpair failed and we were unable to recover it. 00:27:05.738 [2024-07-15 20:52:40.197770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.738 [2024-07-15 20:52:40.197784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.738 qpair failed and we were unable to recover it. 00:27:05.738 [2024-07-15 20:52:40.198079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.738 [2024-07-15 20:52:40.198093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.738 qpair failed and we were unable to recover it. 00:27:05.738 [2024-07-15 20:52:40.198361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.738 [2024-07-15 20:52:40.198376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.738 qpair failed and we were unable to recover it. 00:27:05.738 [2024-07-15 20:52:40.198592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.738 [2024-07-15 20:52:40.198606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.738 qpair failed and we were unable to recover it. 
00:27:05.738 [2024-07-15 20:52:40.198812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.738 [2024-07-15 20:52:40.198826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.738 qpair failed and we were unable to recover it. 00:27:05.738 [2024-07-15 20:52:40.199032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.738 [2024-07-15 20:52:40.199047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.738 qpair failed and we were unable to recover it. 00:27:05.738 [2024-07-15 20:52:40.199241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.738 [2024-07-15 20:52:40.199256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.738 qpair failed and we were unable to recover it. 00:27:05.738 [2024-07-15 20:52:40.199509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.738 [2024-07-15 20:52:40.199540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.738 qpair failed and we were unable to recover it. 00:27:05.738 [2024-07-15 20:52:40.199858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.738 [2024-07-15 20:52:40.199888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.738 qpair failed and we were unable to recover it. 00:27:05.738 [2024-07-15 20:52:40.200111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.738 [2024-07-15 20:52:40.200142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.738 qpair failed and we were unable to recover it. 00:27:05.738 [2024-07-15 20:52:40.200422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.738 [2024-07-15 20:52:40.200452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.738 qpair failed and we were unable to recover it. 00:27:05.738 [2024-07-15 20:52:40.200775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.738 [2024-07-15 20:52:40.200789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.738 qpair failed and we were unable to recover it. 00:27:05.738 [2024-07-15 20:52:40.201042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.738 [2024-07-15 20:52:40.201057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.738 qpair failed and we were unable to recover it. 00:27:05.738 [2024-07-15 20:52:40.201317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.738 [2024-07-15 20:52:40.201331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.738 qpair failed and we were unable to recover it. 
00:27:05.738 [2024-07-15 20:52:40.201583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.738 [2024-07-15 20:52:40.201597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.738 qpair failed and we were unable to recover it. 00:27:05.739 [2024-07-15 20:52:40.201794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.739 [2024-07-15 20:52:40.201808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.739 qpair failed and we were unable to recover it. 00:27:05.739 [2024-07-15 20:52:40.202096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.739 [2024-07-15 20:52:40.202125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.739 qpair failed and we were unable to recover it. 00:27:05.739 [2024-07-15 20:52:40.202300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.739 [2024-07-15 20:52:40.202315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.739 qpair failed and we were unable to recover it. 00:27:05.739 [2024-07-15 20:52:40.202531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.739 [2024-07-15 20:52:40.202561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.739 qpair failed and we were unable to recover it. 00:27:05.739 [2024-07-15 20:52:40.202914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.739 [2024-07-15 20:52:40.202944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.739 qpair failed and we were unable to recover it. 00:27:05.739 [2024-07-15 20:52:40.203247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.739 [2024-07-15 20:52:40.203278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.739 qpair failed and we were unable to recover it. 00:27:05.739 [2024-07-15 20:52:40.203511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.739 [2024-07-15 20:52:40.203541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.739 qpair failed and we were unable to recover it. 00:27:05.739 [2024-07-15 20:52:40.203818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.739 [2024-07-15 20:52:40.203832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.739 qpair failed and we were unable to recover it. 00:27:05.739 [2024-07-15 20:52:40.203973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.739 [2024-07-15 20:52:40.203987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.739 qpair failed and we were unable to recover it. 
00:27:05.739 [2024-07-15 20:52:40.204202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.739 [2024-07-15 20:52:40.204239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:05.739 qpair failed and we were unable to recover it. 00:27:06.013 [2024-07-15 20:52:40.204528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.013 [2024-07-15 20:52:40.204560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.013 qpair failed and we were unable to recover it. 00:27:06.013 [2024-07-15 20:52:40.204890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.013 [2024-07-15 20:52:40.204920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.013 qpair failed and we were unable to recover it. 00:27:06.013 [2024-07-15 20:52:40.205231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.013 [2024-07-15 20:52:40.205262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.013 qpair failed and we were unable to recover it. 00:27:06.013 [2024-07-15 20:52:40.205558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.013 [2024-07-15 20:52:40.205572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.013 qpair failed and we were unable to recover it. 00:27:06.013 [2024-07-15 20:52:40.205845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.013 [2024-07-15 20:52:40.205859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.013 qpair failed and we were unable to recover it. 00:27:06.013 [2024-07-15 20:52:40.206130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.013 [2024-07-15 20:52:40.206172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.013 qpair failed and we were unable to recover it. 00:27:06.013 [2024-07-15 20:52:40.206409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.013 [2024-07-15 20:52:40.206441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.013 qpair failed and we were unable to recover it. 00:27:06.013 [2024-07-15 20:52:40.206750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.013 [2024-07-15 20:52:40.206781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.013 qpair failed and we were unable to recover it. 00:27:06.013 [2024-07-15 20:52:40.207106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.013 [2024-07-15 20:52:40.207136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.013 qpair failed and we were unable to recover it. 
00:27:06.013 [2024-07-15 20:52:40.207423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.013 [2024-07-15 20:52:40.207453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.013 qpair failed and we were unable to recover it. 00:27:06.013 [2024-07-15 20:52:40.207733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.013 [2024-07-15 20:52:40.207762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.013 qpair failed and we were unable to recover it. 00:27:06.013 [2024-07-15 20:52:40.208013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.013 [2024-07-15 20:52:40.208043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.013 qpair failed and we were unable to recover it. 00:27:06.013 [2024-07-15 20:52:40.208343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.013 [2024-07-15 20:52:40.208357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.013 qpair failed and we were unable to recover it. 00:27:06.013 [2024-07-15 20:52:40.208544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.208558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 00:27:06.014 [2024-07-15 20:52:40.208752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.208765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 00:27:06.014 [2024-07-15 20:52:40.209049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.209079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 00:27:06.014 [2024-07-15 20:52:40.209276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.209306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 00:27:06.014 [2024-07-15 20:52:40.209533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.209562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 00:27:06.014 [2024-07-15 20:52:40.209838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.209867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 
00:27:06.014 [2024-07-15 20:52:40.210188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.210217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 00:27:06.014 [2024-07-15 20:52:40.210442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.210473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 00:27:06.014 [2024-07-15 20:52:40.210751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.210782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 00:27:06.014 [2024-07-15 20:52:40.211089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.211119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 00:27:06.014 [2024-07-15 20:52:40.211422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.211458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 00:27:06.014 [2024-07-15 20:52:40.211673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.211687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 00:27:06.014 [2024-07-15 20:52:40.211933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.211947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 00:27:06.014 [2024-07-15 20:52:40.212123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.212137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 00:27:06.014 [2024-07-15 20:52:40.212349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.212363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 00:27:06.014 [2024-07-15 20:52:40.212584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.212597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 
00:27:06.014 [2024-07-15 20:52:40.212867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.212881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 00:27:06.014 [2024-07-15 20:52:40.213125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.213138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 00:27:06.014 [2024-07-15 20:52:40.213313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.213327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 00:27:06.014 [2024-07-15 20:52:40.213510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.213540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 00:27:06.014 [2024-07-15 20:52:40.213837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.213867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 00:27:06.014 [2024-07-15 20:52:40.214174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.214208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 00:27:06.014 [2024-07-15 20:52:40.214453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.214484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 00:27:06.014 [2024-07-15 20:52:40.214835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.214864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 00:27:06.014 [2024-07-15 20:52:40.215084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.215114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 00:27:06.014 [2024-07-15 20:52:40.215368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.215400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 
00:27:06.014 [2024-07-15 20:52:40.215689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.215719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 00:27:06.014 [2024-07-15 20:52:40.216027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.216057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 00:27:06.014 [2024-07-15 20:52:40.216363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.216394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 00:27:06.014 [2024-07-15 20:52:40.216620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.216655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 00:27:06.014 [2024-07-15 20:52:40.216798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.216812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 00:27:06.014 [2024-07-15 20:52:40.217003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.217016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 00:27:06.014 [2024-07-15 20:52:40.217139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.217153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 00:27:06.014 [2024-07-15 20:52:40.217338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.217352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 00:27:06.014 [2024-07-15 20:52:40.217576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.217607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 00:27:06.014 [2024-07-15 20:52:40.217976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.218007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 
00:27:06.014 [2024-07-15 20:52:40.218258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.218288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 00:27:06.014 [2024-07-15 20:52:40.218568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.218598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 00:27:06.014 [2024-07-15 20:52:40.218825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.218855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 00:27:06.014 [2024-07-15 20:52:40.219087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.219117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 00:27:06.014 [2024-07-15 20:52:40.219417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.219448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 00:27:06.014 [2024-07-15 20:52:40.219626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.014 [2024-07-15 20:52:40.219657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.014 qpair failed and we were unable to recover it. 00:27:06.015 [2024-07-15 20:52:40.219955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.015 [2024-07-15 20:52:40.219985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.015 qpair failed and we were unable to recover it. 00:27:06.015 [2024-07-15 20:52:40.220217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.015 [2024-07-15 20:52:40.220256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.015 qpair failed and we were unable to recover it. 00:27:06.015 [2024-07-15 20:52:40.220494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.015 [2024-07-15 20:52:40.220524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.015 qpair failed and we were unable to recover it. 00:27:06.015 [2024-07-15 20:52:40.220765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.015 [2024-07-15 20:52:40.220795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.015 qpair failed and we were unable to recover it. 
00:27:06.015 [2024-07-15 20:52:40.221099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.015 [2024-07-15 20:52:40.221130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.015 qpair failed and we were unable to recover it. 00:27:06.015 [2024-07-15 20:52:40.221457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.015 [2024-07-15 20:52:40.221488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.015 qpair failed and we were unable to recover it. 00:27:06.015 [2024-07-15 20:52:40.221718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.015 [2024-07-15 20:52:40.221734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.015 qpair failed and we were unable to recover it. 00:27:06.015 [2024-07-15 20:52:40.222002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.015 [2024-07-15 20:52:40.222016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.015 qpair failed and we were unable to recover it. 00:27:06.015 [2024-07-15 20:52:40.222264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.015 [2024-07-15 20:52:40.222278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.015 qpair failed and we were unable to recover it. 00:27:06.015 [2024-07-15 20:52:40.222525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.015 [2024-07-15 20:52:40.222538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.015 qpair failed and we were unable to recover it. 00:27:06.015 [2024-07-15 20:52:40.222807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.015 [2024-07-15 20:52:40.222820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.015 qpair failed and we were unable to recover it. 00:27:06.015 [2024-07-15 20:52:40.223067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.015 [2024-07-15 20:52:40.223081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.015 qpair failed and we were unable to recover it. 00:27:06.015 [2024-07-15 20:52:40.223278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.015 [2024-07-15 20:52:40.223292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.015 qpair failed and we were unable to recover it. 00:27:06.015 [2024-07-15 20:52:40.223551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.015 [2024-07-15 20:52:40.223565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.015 qpair failed and we were unable to recover it. 
00:27:06.015 [2024-07-15 20:52:40.223780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.015 [2024-07-15 20:52:40.223794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.015 qpair failed and we were unable to recover it. 00:27:06.015 [2024-07-15 20:52:40.223975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.015 [2024-07-15 20:52:40.223989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.015 qpair failed and we were unable to recover it. 00:27:06.015 [2024-07-15 20:52:40.224245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.015 [2024-07-15 20:52:40.224276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.015 qpair failed and we were unable to recover it. 00:27:06.015 [2024-07-15 20:52:40.224520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.015 [2024-07-15 20:52:40.224549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.015 qpair failed and we were unable to recover it. 00:27:06.015 [2024-07-15 20:52:40.224801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.015 [2024-07-15 20:52:40.224831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.015 qpair failed and we were unable to recover it. 00:27:06.015 [2024-07-15 20:52:40.225053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.015 [2024-07-15 20:52:40.225082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.015 qpair failed and we were unable to recover it. 00:27:06.015 [2024-07-15 20:52:40.225317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.015 [2024-07-15 20:52:40.225331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.015 qpair failed and we were unable to recover it. 00:27:06.015 [2024-07-15 20:52:40.225527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.015 [2024-07-15 20:52:40.225541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.015 qpair failed and we were unable to recover it. 00:27:06.015 [2024-07-15 20:52:40.225794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.015 [2024-07-15 20:52:40.225808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.015 qpair failed and we were unable to recover it. 00:27:06.015 [2024-07-15 20:52:40.225939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.015 [2024-07-15 20:52:40.225953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.015 qpair failed and we were unable to recover it. 
00:27:06.015 [2024-07-15 20:52:40.226194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.015 [2024-07-15 20:52:40.226260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.015 qpair failed and we were unable to recover it. 00:27:06.015 [2024-07-15 20:52:40.226516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.015 [2024-07-15 20:52:40.226547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.015 qpair failed and we were unable to recover it. 00:27:06.015 [2024-07-15 20:52:40.226720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.015 [2024-07-15 20:52:40.226750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.015 qpair failed and we were unable to recover it. 00:27:06.015 [2024-07-15 20:52:40.227052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.015 [2024-07-15 20:52:40.227082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.015 qpair failed and we were unable to recover it. 00:27:06.015 [2024-07-15 20:52:40.227369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.015 [2024-07-15 20:52:40.227401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.015 qpair failed and we were unable to recover it. 00:27:06.015 [2024-07-15 20:52:40.227740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.015 [2024-07-15 20:52:40.227770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.015 qpair failed and we were unable to recover it. 00:27:06.015 [2024-07-15 20:52:40.228076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.015 [2024-07-15 20:52:40.228105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.015 qpair failed and we were unable to recover it. 00:27:06.015 [2024-07-15 20:52:40.228405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.015 [2024-07-15 20:52:40.228419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.015 qpair failed and we were unable to recover it. 00:27:06.015 [2024-07-15 20:52:40.228679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.015 [2024-07-15 20:52:40.228693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.015 qpair failed and we were unable to recover it. 00:27:06.015 [2024-07-15 20:52:40.228940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.015 [2024-07-15 20:52:40.228956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.015 qpair failed and we were unable to recover it. 
00:27:06.015 [2024-07-15 20:52:40.229170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.015 [2024-07-15 20:52:40.229183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.015 qpair failed and we were unable to recover it.
00:27:06.015 [2024-07-15 20:52:40.229429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.015 [2024-07-15 20:52:40.229444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.015 qpair failed and we were unable to recover it.
00:27:06.015 [2024-07-15 20:52:40.229630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.015 [2024-07-15 20:52:40.229660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.015 qpair failed and we were unable to recover it.
00:27:06.015 [2024-07-15 20:52:40.229914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.015 [2024-07-15 20:52:40.229943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.015 qpair failed and we were unable to recover it.
00:27:06.015 [2024-07-15 20:52:40.230243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.015 [2024-07-15 20:52:40.230274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.015 qpair failed and we were unable to recover it.
00:27:06.015 [2024-07-15 20:52:40.230507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.015 [2024-07-15 20:52:40.230537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.015 qpair failed and we were unable to recover it.
00:27:06.015 [2024-07-15 20:52:40.230834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.015 [2024-07-15 20:52:40.230864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.015 qpair failed and we were unable to recover it.
00:27:06.015 [2024-07-15 20:52:40.231173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.015 [2024-07-15 20:52:40.231203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.015 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.231367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.231398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.231690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.231720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.232001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.232031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.232354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.232368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.232587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.232600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.232929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.232998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.233328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.233362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.233679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.233709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.233994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.234024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.234330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.234361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.234665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.234694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.235011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.235041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.235339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.235370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.235638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.235648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.235915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.235924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.236182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.236192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.236430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.236440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.236702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.236711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.236977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.236990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.237250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.237260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.237546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.237556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.237823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.237833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.238023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.238033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.238203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.238252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.238500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.238529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.238819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.238848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.239151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.239180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.239493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.239523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.239822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.239852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.240159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.240187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.240419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.240429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.240664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.240673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.240936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.240945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.241215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.241228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.241495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.241505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.241771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.241781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.241962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.241972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.242176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.242187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.242404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.242414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.242545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.242555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.242794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.242804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.016 [2024-07-15 20:52:40.242935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.016 [2024-07-15 20:52:40.242945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.016 qpair failed and we were unable to recover it.
00:27:06.017 [2024-07-15 20:52:40.243100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.017 [2024-07-15 20:52:40.243110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.017 qpair failed and we were unable to recover it.
00:27:06.017 [2024-07-15 20:52:40.243281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.017 [2024-07-15 20:52:40.243308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.017 qpair failed and we were unable to recover it.
00:27:06.017 [2024-07-15 20:52:40.243636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.017 [2024-07-15 20:52:40.243665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.017 qpair failed and we were unable to recover it.
00:27:06.017 [2024-07-15 20:52:40.244053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.017 [2024-07-15 20:52:40.244123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.017 qpair failed and we were unable to recover it.
00:27:06.017 [2024-07-15 20:52:40.244508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.017 [2024-07-15 20:52:40.244562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:06.017 qpair failed and we were unable to recover it.
00:27:06.017 [2024-07-15 20:52:40.244888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.017 [2024-07-15 20:52:40.244925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.017 qpair failed and we were unable to recover it.
00:27:06.017 [2024-07-15 20:52:40.245230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.017 [2024-07-15 20:52:40.245242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.017 qpair failed and we were unable to recover it.
00:27:06.017 [2024-07-15 20:52:40.245427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.017 [2024-07-15 20:52:40.245438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.017 qpair failed and we were unable to recover it.
00:27:06.017 [2024-07-15 20:52:40.245676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.017 [2024-07-15 20:52:40.245686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.017 qpair failed and we were unable to recover it.
00:27:06.017 [2024-07-15 20:52:40.245869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.017 [2024-07-15 20:52:40.245879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.017 qpair failed and we were unable to recover it.
00:27:06.017 [2024-07-15 20:52:40.246074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.017 [2024-07-15 20:52:40.246103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.017 qpair failed and we were unable to recover it.
00:27:06.017 [2024-07-15 20:52:40.246343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.017 [2024-07-15 20:52:40.246374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.017 qpair failed and we were unable to recover it.
00:27:06.017 [2024-07-15 20:52:40.246607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.017 [2024-07-15 20:52:40.246637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.017 qpair failed and we were unable to recover it.
00:27:06.017 [2024-07-15 20:52:40.246963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.017 [2024-07-15 20:52:40.246992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.017 qpair failed and we were unable to recover it.
00:27:06.017 [2024-07-15 20:52:40.247214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.017 [2024-07-15 20:52:40.247252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.017 qpair failed and we were unable to recover it.
00:27:06.017 [2024-07-15 20:52:40.247538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.017 [2024-07-15 20:52:40.247568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.017 qpair failed and we were unable to recover it.
00:27:06.017 [2024-07-15 20:52:40.247847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.017 [2024-07-15 20:52:40.247881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.017 qpair failed and we were unable to recover it.
00:27:06.017 [2024-07-15 20:52:40.248162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.017 [2024-07-15 20:52:40.248191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.017 qpair failed and we were unable to recover it.
00:27:06.017 [2024-07-15 20:52:40.248522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.017 [2024-07-15 20:52:40.248553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.017 qpair failed and we were unable to recover it.
00:27:06.017 [2024-07-15 20:52:40.248827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.017 [2024-07-15 20:52:40.248839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.017 qpair failed and we were unable to recover it.
00:27:06.017 [2024-07-15 20:52:40.249081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.017 [2024-07-15 20:52:40.249091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.017 qpair failed and we were unable to recover it.
00:27:06.017 [2024-07-15 20:52:40.249281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.017 [2024-07-15 20:52:40.249292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.017 qpair failed and we were unable to recover it.
00:27:06.017 [2024-07-15 20:52:40.249533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.017 [2024-07-15 20:52:40.249543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.017 qpair failed and we were unable to recover it.
00:27:06.017 [2024-07-15 20:52:40.249673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.017 [2024-07-15 20:52:40.249683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.017 qpair failed and we were unable to recover it.
00:27:06.017 [2024-07-15 20:52:40.249873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.017 [2024-07-15 20:52:40.249883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.017 qpair failed and we were unable to recover it.
00:27:06.017 [2024-07-15 20:52:40.250124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.017 [2024-07-15 20:52:40.250135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.017 qpair failed and we were unable to recover it.
00:27:06.017 [2024-07-15 20:52:40.250282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.017 [2024-07-15 20:52:40.250293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.017 qpair failed and we were unable to recover it.
00:27:06.017 [2024-07-15 20:52:40.250434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.017 [2024-07-15 20:52:40.250444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.017 qpair failed and we were unable to recover it.
00:27:06.017 [2024-07-15 20:52:40.250688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.017 [2024-07-15 20:52:40.250717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.017 qpair failed and we were unable to recover it.
00:27:06.017 [2024-07-15 20:52:40.250886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.017 [2024-07-15 20:52:40.250916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.017 qpair failed and we were unable to recover it.
00:27:06.017 [2024-07-15 20:52:40.251093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.017 [2024-07-15 20:52:40.251123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.017 qpair failed and we were unable to recover it.
00:27:06.017 [2024-07-15 20:52:40.251436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.017 [2024-07-15 20:52:40.251466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.017 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.251741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.251770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.251934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.251964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.252246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.252276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.252504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.252514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.252754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.252764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.253006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.253016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.253198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.253208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.253400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.253430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.253608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.253637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.253933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.253962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.254123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.254152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.254417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.254460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.254772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.254786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.255099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.255129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.255453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.255484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.255767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.255796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.256108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.256138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.256390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.256404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.256635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.256648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.256844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.256858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.257105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.257118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.257414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.257428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.257695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.257709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.257902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.257915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.258118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.258135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.258401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.258431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.258737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.258767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.259002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.259032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.259338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.259368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.259666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.259679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.259874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.259888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.260033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.260047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.260300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.260331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.260683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.260713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.260963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.260993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.261278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.261310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.261530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.261544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.261816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.261829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.262028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.262041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.018 [2024-07-15 20:52:40.262221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.018 [2024-07-15 20:52:40.262239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.018 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.262384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.262397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.262603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.262617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.262758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.262772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.263080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.263094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.263312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.263327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.263519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.263533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.263707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.263721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.264011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.264041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.264284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.264315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.264498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.264512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.264785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.264799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.265036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.265050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.265266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.265281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.265493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.265507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.265753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.265767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.266019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.266033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.266167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.266181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.266367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.266381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.266589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.266602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.266853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.266867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.267062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.267076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.267389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.267403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.267553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.267567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.267777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.267807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.268135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.268171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.268463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.268494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.268807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.268837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.269134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.269163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.269481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.269512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.269762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.269775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.270069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.270082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.270337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.270351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.270596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.270609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.270753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.270767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.271042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.271071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.271404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.271434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.271727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.271756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.272010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.272040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.272264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.272295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.272578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.272607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.272752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.272782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.272993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.019 [2024-07-15 20:52:40.273023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.019 qpair failed and we were unable to recover it.
00:27:06.019 [2024-07-15 20:52:40.273351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.273382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.273656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.273686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.273940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.273970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.274277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.274319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.274559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.274573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.274846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.274859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.275116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.275130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.275392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.275406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.275599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.275613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.275844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.275881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.276086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.276114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.276343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.276355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.276547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.276558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.276852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.276882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.277094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.277124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.277448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.277458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.277751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.277781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.278109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.278138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.278430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.278461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.278772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.278801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.279032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.279062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.279372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.279403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.279707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.279736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.280046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.280076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.280376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.280406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.280732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.280742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.281053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.281082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.281311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.281342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.281581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.281610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.281921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.281950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.282256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.282285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.282570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.282580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.282756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.282777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.282965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.282994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.283250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.283280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.283583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.283613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.283919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.283949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.284256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.284294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.284583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.284593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.284851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.284861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.285129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.285139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.285398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.285408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.020 [2024-07-15 20:52:40.285546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.020 [2024-07-15 20:52:40.285556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.020 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.285687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.285697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.285930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.285940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.286125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.286135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.286272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.286282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.286454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.286464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.286676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.286705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.286930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.286964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.287252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.287282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.287590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.287620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.287920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.287950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.288116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.288145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.288445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.288474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.288780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.288809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.289115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.289144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.289456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.289486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.289785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.289815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.290050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.290080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.290391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.290431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.290605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.290615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.290803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.290813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.291051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.291060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.291239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.291249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.291508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.291538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.291845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.291874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.292181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.292211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.292505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.292535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.292838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.292867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.293174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.293203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.293503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.293514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.293769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.293779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.294032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.294042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.294211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.294222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.294400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.294410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.294646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.294675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.294984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.295013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.295271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.295303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.295611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.295640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.295904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.295933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.296215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.296254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.296500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.296529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.296833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.296862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.297170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.297199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.021 [2024-07-15 20:52:40.297606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.021 [2024-07-15 20:52:40.297675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.021 qpair failed and we were unable to recover it.
00:27:06.022 [2024-07-15 20:52:40.297995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.022 [2024-07-15 20:52:40.298010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.022 qpair failed and we were unable to recover it.
00:27:06.022 [2024-07-15 20:52:40.298267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.022 [2024-07-15 20:52:40.298283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.022 qpair failed and we were unable to recover it.
00:27:06.022 [2024-07-15 20:52:40.298557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.022 [2024-07-15 20:52:40.298572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.022 qpair failed and we were unable to recover it.
00:27:06.022 [2024-07-15 20:52:40.298865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.022 [2024-07-15 20:52:40.298883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.022 qpair failed and we were unable to recover it.
00:27:06.022 [2024-07-15 20:52:40.299028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.022 [2024-07-15 20:52:40.299042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.022 qpair failed and we were unable to recover it.
00:27:06.022 [2024-07-15 20:52:40.299357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.022 [2024-07-15 20:52:40.299371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.022 qpair failed and we were unable to recover it.
00:27:06.022 [2024-07-15 20:52:40.299662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.299676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.299934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.299948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.300219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.300238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.300496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.300510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.300698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.300711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.300999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.301012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.301203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.301217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.301419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.301434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.301623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.301637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.301905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.301919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.302205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.302218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.302479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.302494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.302674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.302689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.302931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.302944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.303132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.303146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.303370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.303383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.303634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.303647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.303960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.303989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.304300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.304330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.304605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.304619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.304856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.304870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.305054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.305068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.305348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.305378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.305656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.305670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.305906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.305922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.306170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.306184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.306381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.306395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.306661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.306691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.306994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.307025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.307332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.307362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.307644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.307674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.307997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.308027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.308319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.308350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.308653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.308667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.308979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.308993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.309246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.309260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.309523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.309537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.309787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.309801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.309998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.310012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.310282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.310311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.310622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.310652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.310950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.023 [2024-07-15 20:52:40.310964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.023 qpair failed and we were unable to recover it.
00:27:06.023 [2024-07-15 20:52:40.311237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.311252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.311437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.311451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.311750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.311764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.311891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.311905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.312093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.312126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.312422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.312452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.312756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.312786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.313018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.313048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.313281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.313311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.313518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.313532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.313809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.313822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.314087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.314100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.314390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.314403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.314585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.314599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.314845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.314875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.315176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.315206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.315513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.315544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.315843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.315872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.316174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.316204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.316497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.316528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.316842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.316856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.317124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.317138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.317263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.317278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.317501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.317535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.317715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.317745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.317994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.318023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.318304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.318334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.318569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.318599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.318901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.318931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.319159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.319189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.319505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.319535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.319818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.319847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.320147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.320176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.320491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.320520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.320816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.320845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.321152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.321181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.321488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.321524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.321800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.321830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.322054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.322083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.322317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.322348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.322648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.322658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.322827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.024 [2024-07-15 20:52:40.322837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.024 qpair failed and we were unable to recover it.
00:27:06.024 [2024-07-15 20:52:40.323038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.024 [2024-07-15 20:52:40.323067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.024 qpair failed and we were unable to recover it. 00:27:06.024 [2024-07-15 20:52:40.323378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.323408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.025 [2024-07-15 20:52:40.323705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.323735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.025 [2024-07-15 20:52:40.324045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.324075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.025 [2024-07-15 20:52:40.324378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.324408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.025 [2024-07-15 20:52:40.324682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.324692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.025 [2024-07-15 20:52:40.324955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.324964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.025 [2024-07-15 20:52:40.325204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.325213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.025 [2024-07-15 20:52:40.325465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.325475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.025 [2024-07-15 20:52:40.325695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.325705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 
00:27:06.025 [2024-07-15 20:52:40.325991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.326001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.025 [2024-07-15 20:52:40.326278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.326288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.025 [2024-07-15 20:52:40.326472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.326482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.025 [2024-07-15 20:52:40.326668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.326678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.025 [2024-07-15 20:52:40.326941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.326971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.025 [2024-07-15 20:52:40.327183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.327212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.025 [2024-07-15 20:52:40.327517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.327547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.025 [2024-07-15 20:52:40.327851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.327880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.025 [2024-07-15 20:52:40.328132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.328161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.025 [2024-07-15 20:52:40.328382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.328413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 
00:27:06.025 [2024-07-15 20:52:40.328706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.328716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.025 [2024-07-15 20:52:40.328973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.328983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.025 [2024-07-15 20:52:40.329250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.329261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.025 [2024-07-15 20:52:40.329547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.329557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.025 [2024-07-15 20:52:40.329737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.329746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.025 [2024-07-15 20:52:40.330027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.330037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.025 [2024-07-15 20:52:40.330321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.330331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.025 [2024-07-15 20:52:40.330617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.330626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.025 [2024-07-15 20:52:40.330869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.330879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.025 [2024-07-15 20:52:40.331136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.331146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 
00:27:06.025 [2024-07-15 20:52:40.331405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.331415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.025 [2024-07-15 20:52:40.331603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.331612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.025 [2024-07-15 20:52:40.331846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.331856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.025 [2024-07-15 20:52:40.332077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.332106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.025 [2024-07-15 20:52:40.332394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.332430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.025 [2024-07-15 20:52:40.332751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.332780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.025 [2024-07-15 20:52:40.333093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.333122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.025 [2024-07-15 20:52:40.333422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.333452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.025 [2024-07-15 20:52:40.333705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.333734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.025 [2024-07-15 20:52:40.334017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.334027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 
00:27:06.025 [2024-07-15 20:52:40.334274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.334285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.025 [2024-07-15 20:52:40.334536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.334545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.025 [2024-07-15 20:52:40.334787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.334796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.025 [2024-07-15 20:52:40.335056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.335066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.025 [2024-07-15 20:52:40.335279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.025 [2024-07-15 20:52:40.335290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.025 qpair failed and we were unable to recover it. 00:27:06.026 [2024-07-15 20:52:40.335422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.026 [2024-07-15 20:52:40.335432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.026 qpair failed and we were unable to recover it. 00:27:06.026 [2024-07-15 20:52:40.335719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.026 [2024-07-15 20:52:40.335748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.026 qpair failed and we were unable to recover it. 00:27:06.026 [2024-07-15 20:52:40.336070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.026 [2024-07-15 20:52:40.336100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.026 qpair failed and we were unable to recover it. 00:27:06.026 [2024-07-15 20:52:40.336395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.026 [2024-07-15 20:52:40.336427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.026 qpair failed and we were unable to recover it. 00:27:06.026 [2024-07-15 20:52:40.336734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.026 [2024-07-15 20:52:40.336763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.026 qpair failed and we were unable to recover it. 
00:27:06.026 [2024-07-15 20:52:40.337044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.026 [2024-07-15 20:52:40.337073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.026 qpair failed and we were unable to recover it. 00:27:06.026 [2024-07-15 20:52:40.337370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.026 [2024-07-15 20:52:40.337401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.026 qpair failed and we were unable to recover it. 00:27:06.026 [2024-07-15 20:52:40.337629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.026 [2024-07-15 20:52:40.337657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.026 qpair failed and we were unable to recover it. 00:27:06.026 [2024-07-15 20:52:40.337960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.026 [2024-07-15 20:52:40.337990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.026 qpair failed and we were unable to recover it. 00:27:06.026 [2024-07-15 20:52:40.338282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.026 [2024-07-15 20:52:40.338312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.026 qpair failed and we were unable to recover it. 00:27:06.026 [2024-07-15 20:52:40.338640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.026 [2024-07-15 20:52:40.338670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.026 qpair failed and we were unable to recover it. 00:27:06.026 [2024-07-15 20:52:40.338950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.026 [2024-07-15 20:52:40.338960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.026 qpair failed and we were unable to recover it. 00:27:06.026 [2024-07-15 20:52:40.339095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.026 [2024-07-15 20:52:40.339104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.026 qpair failed and we were unable to recover it. 00:27:06.026 [2024-07-15 20:52:40.339305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.026 [2024-07-15 20:52:40.339315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.026 qpair failed and we were unable to recover it. 00:27:06.026 [2024-07-15 20:52:40.339529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.026 [2024-07-15 20:52:40.339539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.026 qpair failed and we were unable to recover it. 
00:27:06.026 [2024-07-15 20:52:40.339797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.026 [2024-07-15 20:52:40.339807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.026 qpair failed and we were unable to recover it. 00:27:06.026 [2024-07-15 20:52:40.340063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.026 [2024-07-15 20:52:40.340073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.026 qpair failed and we were unable to recover it. 00:27:06.026 [2024-07-15 20:52:40.340313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.026 [2024-07-15 20:52:40.340323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.026 qpair failed and we were unable to recover it. 00:27:06.026 [2024-07-15 20:52:40.340502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.026 [2024-07-15 20:52:40.340512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.026 qpair failed and we were unable to recover it. 00:27:06.026 [2024-07-15 20:52:40.340724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.026 [2024-07-15 20:52:40.340733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.026 qpair failed and we were unable to recover it. 00:27:06.026 [2024-07-15 20:52:40.340964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.026 [2024-07-15 20:52:40.340994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.026 qpair failed and we were unable to recover it. 00:27:06.026 [2024-07-15 20:52:40.341296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.026 [2024-07-15 20:52:40.341326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.026 qpair failed and we were unable to recover it. 00:27:06.026 [2024-07-15 20:52:40.341634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.026 [2024-07-15 20:52:40.341663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.026 qpair failed and we were unable to recover it. 00:27:06.026 [2024-07-15 20:52:40.341886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.026 [2024-07-15 20:52:40.341914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.026 qpair failed and we were unable to recover it. 00:27:06.026 [2024-07-15 20:52:40.342210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.026 [2024-07-15 20:52:40.342220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.026 qpair failed and we were unable to recover it. 
00:27:06.026 [2024-07-15 20:52:40.342489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.026 [2024-07-15 20:52:40.342500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.026 qpair failed and we were unable to recover it. 00:27:06.026 [2024-07-15 20:52:40.342687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.026 [2024-07-15 20:52:40.342697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.026 qpair failed and we were unable to recover it. 00:27:06.026 [2024-07-15 20:52:40.342884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.026 [2024-07-15 20:52:40.342913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.026 qpair failed and we were unable to recover it. 00:27:06.026 [2024-07-15 20:52:40.343139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.026 [2024-07-15 20:52:40.343168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.026 qpair failed and we were unable to recover it. 00:27:06.026 [2024-07-15 20:52:40.343474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.026 [2024-07-15 20:52:40.343505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.026 qpair failed and we were unable to recover it. 00:27:06.026 [2024-07-15 20:52:40.343811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.026 [2024-07-15 20:52:40.343840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.026 qpair failed and we were unable to recover it. 00:27:06.026 [2024-07-15 20:52:40.344135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.026 [2024-07-15 20:52:40.344144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.026 qpair failed and we were unable to recover it. 00:27:06.026 [2024-07-15 20:52:40.344406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.026 [2024-07-15 20:52:40.344416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.026 qpair failed and we were unable to recover it. 00:27:06.026 [2024-07-15 20:52:40.344658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.026 [2024-07-15 20:52:40.344667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.026 qpair failed and we were unable to recover it. 00:27:06.026 [2024-07-15 20:52:40.344874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.026 [2024-07-15 20:52:40.344883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 
00:27:06.027 [2024-07-15 20:52:40.345142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.027 [2024-07-15 20:52:40.345152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 00:27:06.027 [2024-07-15 20:52:40.345282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.027 [2024-07-15 20:52:40.345292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 00:27:06.027 [2024-07-15 20:52:40.345498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.027 [2024-07-15 20:52:40.345508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 00:27:06.027 [2024-07-15 20:52:40.345694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.027 [2024-07-15 20:52:40.345704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 00:27:06.027 [2024-07-15 20:52:40.345961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.027 [2024-07-15 20:52:40.345970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 00:27:06.027 [2024-07-15 20:52:40.346251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.027 [2024-07-15 20:52:40.346261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 00:27:06.027 [2024-07-15 20:52:40.346504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.027 [2024-07-15 20:52:40.346513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 00:27:06.027 [2024-07-15 20:52:40.346769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.027 [2024-07-15 20:52:40.346779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 00:27:06.027 [2024-07-15 20:52:40.347040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.027 [2024-07-15 20:52:40.347050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 00:27:06.027 [2024-07-15 20:52:40.347242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.027 [2024-07-15 20:52:40.347252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 
00:27:06.027 [2024-07-15 20:52:40.347447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.027 [2024-07-15 20:52:40.347477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 00:27:06.027 [2024-07-15 20:52:40.347634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.027 [2024-07-15 20:52:40.347663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 00:27:06.027 [2024-07-15 20:52:40.347877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.027 [2024-07-15 20:52:40.347906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 00:27:06.027 [2024-07-15 20:52:40.348057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.027 [2024-07-15 20:52:40.348067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 00:27:06.027 [2024-07-15 20:52:40.348198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.027 [2024-07-15 20:52:40.348208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 00:27:06.027 [2024-07-15 20:52:40.348408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.027 [2024-07-15 20:52:40.348419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 00:27:06.027 [2024-07-15 20:52:40.348631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.027 [2024-07-15 20:52:40.348640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 00:27:06.027 [2024-07-15 20:52:40.348744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.027 [2024-07-15 20:52:40.348753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 00:27:06.027 [2024-07-15 20:52:40.349023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.027 [2024-07-15 20:52:40.349033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 00:27:06.027 [2024-07-15 20:52:40.349312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.027 [2024-07-15 20:52:40.349323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 
00:27:06.027 [2024-07-15 20:52:40.349461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.027 [2024-07-15 20:52:40.349470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 00:27:06.027 [2024-07-15 20:52:40.349735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.027 [2024-07-15 20:52:40.349768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 00:27:06.027 [2024-07-15 20:52:40.350117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.027 [2024-07-15 20:52:40.350145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 00:27:06.027 [2024-07-15 20:52:40.350448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.027 [2024-07-15 20:52:40.350479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 00:27:06.027 [2024-07-15 20:52:40.350704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.027 [2024-07-15 20:52:40.350733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 00:27:06.027 [2024-07-15 20:52:40.351029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.027 [2024-07-15 20:52:40.351058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 00:27:06.027 [2024-07-15 20:52:40.351356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.027 [2024-07-15 20:52:40.351386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 00:27:06.027 [2024-07-15 20:52:40.351677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.027 [2024-07-15 20:52:40.351707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 00:27:06.027 [2024-07-15 20:52:40.352008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.027 [2024-07-15 20:52:40.352037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 00:27:06.027 [2024-07-15 20:52:40.352209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.027 [2024-07-15 20:52:40.352247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 
00:27:06.027 [2024-07-15 20:52:40.352506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.027 [2024-07-15 20:52:40.352535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 00:27:06.027 [2024-07-15 20:52:40.352784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.027 [2024-07-15 20:52:40.352793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 00:27:06.027 [2024-07-15 20:52:40.352980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.027 [2024-07-15 20:52:40.352990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 00:27:06.027 [2024-07-15 20:52:40.353262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.027 [2024-07-15 20:52:40.353292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 00:27:06.027 [2024-07-15 20:52:40.353514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.027 [2024-07-15 20:52:40.353543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 00:27:06.027 [2024-07-15 20:52:40.353853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.027 [2024-07-15 20:52:40.353883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 00:27:06.027 [2024-07-15 20:52:40.354168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.027 [2024-07-15 20:52:40.354197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 00:27:06.027 [2024-07-15 20:52:40.354512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.027 [2024-07-15 20:52:40.354543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 00:27:06.027 [2024-07-15 20:52:40.354836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.027 [2024-07-15 20:52:40.354846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.027 qpair failed and we were unable to recover it. 00:27:06.028 [2024-07-15 20:52:40.354969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.028 [2024-07-15 20:52:40.354979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.028 qpair failed and we were unable to recover it. 
00:27:06.028 [2024-07-15 20:52:40.355239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.028 [2024-07-15 20:52:40.355250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.028 qpair failed and we were unable to recover it. 00:27:06.028 [2024-07-15 20:52:40.355517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.028 [2024-07-15 20:52:40.355545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.028 qpair failed and we were unable to recover it. 00:27:06.028 [2024-07-15 20:52:40.355774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.028 [2024-07-15 20:52:40.355802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.028 qpair failed and we were unable to recover it. 00:27:06.028 [2024-07-15 20:52:40.356104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.028 [2024-07-15 20:52:40.356133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.028 qpair failed and we were unable to recover it. 00:27:06.028 [2024-07-15 20:52:40.356356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.028 [2024-07-15 20:52:40.356386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.028 qpair failed and we were unable to recover it. 00:27:06.028 [2024-07-15 20:52:40.356682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.028 [2024-07-15 20:52:40.356710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.028 qpair failed and we were unable to recover it. 00:27:06.028 [2024-07-15 20:52:40.356947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.028 [2024-07-15 20:52:40.356957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.028 qpair failed and we were unable to recover it. 00:27:06.028 [2024-07-15 20:52:40.357128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.028 [2024-07-15 20:52:40.357138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.028 qpair failed and we were unable to recover it. 00:27:06.028 [2024-07-15 20:52:40.357386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.028 [2024-07-15 20:52:40.357417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.028 qpair failed and we were unable to recover it. 00:27:06.028 [2024-07-15 20:52:40.357590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.028 [2024-07-15 20:52:40.357599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.028 qpair failed and we were unable to recover it. 
00:27:06.028 [2024-07-15 20:52:40.357861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.028 [2024-07-15 20:52:40.357870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.028 qpair failed and we were unable to recover it. 00:27:06.028 [2024-07-15 20:52:40.358143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.028 [2024-07-15 20:52:40.358172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.028 qpair failed and we were unable to recover it. 00:27:06.028 [2024-07-15 20:52:40.358409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.028 [2024-07-15 20:52:40.358439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.028 qpair failed and we were unable to recover it. 00:27:06.028 [2024-07-15 20:52:40.358664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.028 [2024-07-15 20:52:40.358674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.028 qpair failed and we were unable to recover it. 00:27:06.028 [2024-07-15 20:52:40.358848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.028 [2024-07-15 20:52:40.358858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.028 qpair failed and we were unable to recover it. 00:27:06.028 [2024-07-15 20:52:40.359102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.028 [2024-07-15 20:52:40.359131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.028 qpair failed and we were unable to recover it. 00:27:06.028 [2024-07-15 20:52:40.359434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.028 [2024-07-15 20:52:40.359464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.028 qpair failed and we were unable to recover it. 00:27:06.028 [2024-07-15 20:52:40.359767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.028 [2024-07-15 20:52:40.359796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.028 qpair failed and we were unable to recover it. 00:27:06.028 [2024-07-15 20:52:40.360018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.028 [2024-07-15 20:52:40.360046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.028 qpair failed and we were unable to recover it. 00:27:06.028 [2024-07-15 20:52:40.360336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.028 [2024-07-15 20:52:40.360366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.028 qpair failed and we were unable to recover it. 
00:27:06.028 [2024-07-15 20:52:40.360603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.028 [2024-07-15 20:52:40.360632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.028 qpair failed and we were unable to recover it. 00:27:06.028 [2024-07-15 20:52:40.360930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.028 [2024-07-15 20:52:40.360941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.028 qpair failed and we were unable to recover it. 00:27:06.028 [2024-07-15 20:52:40.361117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.028 [2024-07-15 20:52:40.361127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.028 qpair failed and we were unable to recover it. 00:27:06.028 [2024-07-15 20:52:40.361307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.028 [2024-07-15 20:52:40.361317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.028 qpair failed and we were unable to recover it. 00:27:06.028 [2024-07-15 20:52:40.361580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.028 [2024-07-15 20:52:40.361590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.028 qpair failed and we were unable to recover it. 00:27:06.028 [2024-07-15 20:52:40.361790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.028 [2024-07-15 20:52:40.361799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.028 qpair failed and we were unable to recover it. 00:27:06.028 [2024-07-15 20:52:40.362059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.028 [2024-07-15 20:52:40.362088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.028 qpair failed and we were unable to recover it. 00:27:06.028 [2024-07-15 20:52:40.362388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.028 [2024-07-15 20:52:40.362418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.028 qpair failed and we were unable to recover it. 00:27:06.028 [2024-07-15 20:52:40.362630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.028 [2024-07-15 20:52:40.362659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.028 qpair failed and we were unable to recover it. 00:27:06.028 [2024-07-15 20:52:40.362874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.028 [2024-07-15 20:52:40.362884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.028 qpair failed and we were unable to recover it. 
00:27:06.028 [2024-07-15 20:52:40.363009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.028 [2024-07-15 20:52:40.363019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.028 qpair failed and we were unable to recover it.
00:27:06.028 [... the same three-line error repeats roughly 200 more times between 20:52:40.363 and 20:52:40.415, always connect() failed with errno = 111 against tqpair=0x7f4840000b90, addr=10.0.0.2, port=4420 ...]
00:27:06.033 [2024-07-15 20:52:40.415452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.033 [2024-07-15 20:52:40.415461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.033 qpair failed and we were unable to recover it.
00:27:06.033 [2024-07-15 20:52:40.415605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.033 [2024-07-15 20:52:40.415615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.033 qpair failed and we were unable to recover it. 00:27:06.033 [2024-07-15 20:52:40.415763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.033 [2024-07-15 20:52:40.415773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.033 qpair failed and we were unable to recover it. 00:27:06.033 [2024-07-15 20:52:40.415969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.033 [2024-07-15 20:52:40.415979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.033 qpair failed and we were unable to recover it. 00:27:06.033 [2024-07-15 20:52:40.416265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.416275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.034 [2024-07-15 20:52:40.416454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.416464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.034 [2024-07-15 20:52:40.416668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.416678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.034 [2024-07-15 20:52:40.416798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.416808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.034 [2024-07-15 20:52:40.417113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.417123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.034 [2024-07-15 20:52:40.417294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.417305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.034 [2024-07-15 20:52:40.417499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.417509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 
00:27:06.034 [2024-07-15 20:52:40.417717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.417727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.034 [2024-07-15 20:52:40.417951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.417961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.034 [2024-07-15 20:52:40.418082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.418092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.034 [2024-07-15 20:52:40.418377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.418387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.034 [2024-07-15 20:52:40.418571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.418581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.034 [2024-07-15 20:52:40.418766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.418776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.034 [2024-07-15 20:52:40.418909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.418919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.034 [2024-07-15 20:52:40.419112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.419122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.034 [2024-07-15 20:52:40.419310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.419320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.034 [2024-07-15 20:52:40.419574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.419584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 
00:27:06.034 [2024-07-15 20:52:40.419770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.419780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.034 [2024-07-15 20:52:40.419950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.419962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.034 [2024-07-15 20:52:40.420133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.420143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.034 [2024-07-15 20:52:40.420381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.420392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.034 [2024-07-15 20:52:40.420632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.420642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.034 [2024-07-15 20:52:40.420914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.420924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.034 [2024-07-15 20:52:40.421174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.421184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.034 [2024-07-15 20:52:40.421438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.421448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.034 [2024-07-15 20:52:40.421628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.421638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.034 [2024-07-15 20:52:40.421877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.421887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 
00:27:06.034 [2024-07-15 20:52:40.422125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.422135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.034 [2024-07-15 20:52:40.422253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.422263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.034 [2024-07-15 20:52:40.422545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.422555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.034 [2024-07-15 20:52:40.422816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.422826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.034 [2024-07-15 20:52:40.423018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.423028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.034 [2024-07-15 20:52:40.423315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.423325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.034 [2024-07-15 20:52:40.423608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.423617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.034 [2024-07-15 20:52:40.423817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.423827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.034 [2024-07-15 20:52:40.424119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.424128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.034 [2024-07-15 20:52:40.424344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.424354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 
00:27:06.034 [2024-07-15 20:52:40.424496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.424506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.034 [2024-07-15 20:52:40.424770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.424780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.034 [2024-07-15 20:52:40.424985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.424994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.034 [2024-07-15 20:52:40.425187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.425196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.034 [2024-07-15 20:52:40.425340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.425350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.034 [2024-07-15 20:52:40.425585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.425594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.034 [2024-07-15 20:52:40.425734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.034 [2024-07-15 20:52:40.425744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.034 qpair failed and we were unable to recover it. 00:27:06.035 [2024-07-15 20:52:40.425921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.035 [2024-07-15 20:52:40.425931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.035 qpair failed and we were unable to recover it. 00:27:06.035 [2024-07-15 20:52:40.426156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.035 [2024-07-15 20:52:40.426166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.035 qpair failed and we were unable to recover it. 00:27:06.035 [2024-07-15 20:52:40.426334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.035 [2024-07-15 20:52:40.426344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.035 qpair failed and we were unable to recover it. 
00:27:06.035 [2024-07-15 20:52:40.426579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.035 [2024-07-15 20:52:40.426589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.035 qpair failed and we were unable to recover it. 00:27:06.035 [2024-07-15 20:52:40.426778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.035 [2024-07-15 20:52:40.426788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.035 qpair failed and we were unable to recover it. 00:27:06.035 [2024-07-15 20:52:40.427051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.035 [2024-07-15 20:52:40.427061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.035 qpair failed and we were unable to recover it. 00:27:06.035 [2024-07-15 20:52:40.427320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.035 [2024-07-15 20:52:40.427331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.035 qpair failed and we were unable to recover it. 00:27:06.035 [2024-07-15 20:52:40.427452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.035 [2024-07-15 20:52:40.427462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.035 qpair failed and we were unable to recover it. 00:27:06.035 [2024-07-15 20:52:40.427648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.035 [2024-07-15 20:52:40.427659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.035 qpair failed and we were unable to recover it. 00:27:06.035 [2024-07-15 20:52:40.427872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.035 [2024-07-15 20:52:40.427882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.035 qpair failed and we were unable to recover it. 00:27:06.035 [2024-07-15 20:52:40.428000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.035 [2024-07-15 20:52:40.428010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.035 qpair failed and we were unable to recover it. 00:27:06.035 [2024-07-15 20:52:40.428176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.035 [2024-07-15 20:52:40.428186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.035 qpair failed and we were unable to recover it. 00:27:06.035 [2024-07-15 20:52:40.428358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.035 [2024-07-15 20:52:40.428368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.035 qpair failed and we were unable to recover it. 
00:27:06.035 [2024-07-15 20:52:40.428606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.035 [2024-07-15 20:52:40.428616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.035 qpair failed and we were unable to recover it. 00:27:06.035 [2024-07-15 20:52:40.428862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.035 [2024-07-15 20:52:40.428897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.035 qpair failed and we were unable to recover it. 00:27:06.035 [2024-07-15 20:52:40.429189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.035 [2024-07-15 20:52:40.429218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.035 qpair failed and we were unable to recover it. 00:27:06.035 [2024-07-15 20:52:40.429532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.035 [2024-07-15 20:52:40.429563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.035 qpair failed and we were unable to recover it. 00:27:06.035 [2024-07-15 20:52:40.429726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.035 [2024-07-15 20:52:40.429755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.035 qpair failed and we were unable to recover it. 00:27:06.035 [2024-07-15 20:52:40.429982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.035 [2024-07-15 20:52:40.429992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.035 qpair failed and we were unable to recover it. 00:27:06.035 [2024-07-15 20:52:40.430247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.035 [2024-07-15 20:52:40.430258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.035 qpair failed and we were unable to recover it. 00:27:06.035 [2024-07-15 20:52:40.430507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.035 [2024-07-15 20:52:40.430517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.035 qpair failed and we were unable to recover it. 00:27:06.035 [2024-07-15 20:52:40.430728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.035 [2024-07-15 20:52:40.430738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.035 qpair failed and we were unable to recover it. 00:27:06.035 [2024-07-15 20:52:40.431051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.035 [2024-07-15 20:52:40.431062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.035 qpair failed and we were unable to recover it. 
00:27:06.035 [2024-07-15 20:52:40.431199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.035 [2024-07-15 20:52:40.431209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.035 qpair failed and we were unable to recover it. 00:27:06.035 [2024-07-15 20:52:40.431490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.035 [2024-07-15 20:52:40.431520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.035 qpair failed and we were unable to recover it. 00:27:06.035 [2024-07-15 20:52:40.431696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.035 [2024-07-15 20:52:40.431726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.035 qpair failed and we were unable to recover it. 00:27:06.035 [2024-07-15 20:52:40.432034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.035 [2024-07-15 20:52:40.432063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.035 qpair failed and we were unable to recover it. 00:27:06.035 [2024-07-15 20:52:40.432322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.035 [2024-07-15 20:52:40.432355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.035 qpair failed and we were unable to recover it. 00:27:06.035 [2024-07-15 20:52:40.432665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.035 [2024-07-15 20:52:40.432695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.035 qpair failed and we were unable to recover it. 00:27:06.035 [2024-07-15 20:52:40.433007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.035 [2024-07-15 20:52:40.433019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.035 qpair failed and we were unable to recover it. 00:27:06.035 [2024-07-15 20:52:40.433290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.035 [2024-07-15 20:52:40.433301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.035 qpair failed and we were unable to recover it. 00:27:06.035 [2024-07-15 20:52:40.433461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.035 [2024-07-15 20:52:40.433471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.035 qpair failed and we were unable to recover it. 00:27:06.035 [2024-07-15 20:52:40.433651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.035 [2024-07-15 20:52:40.433661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.035 qpair failed and we were unable to recover it. 
00:27:06.035 [2024-07-15 20:52:40.433856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.035 [2024-07-15 20:52:40.433885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.035 qpair failed and we were unable to recover it. 00:27:06.035 [2024-07-15 20:52:40.434177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.035 [2024-07-15 20:52:40.434206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.035 qpair failed and we were unable to recover it. 00:27:06.035 [2024-07-15 20:52:40.434470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.035 [2024-07-15 20:52:40.434502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.035 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.434750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.434760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.434951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.434961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.435177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.435188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.435332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.435343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.435607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.435616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.435779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.435813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.436043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.436077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 
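errno = 111 on Linux is ECONNREFUSED: every connect() the initiator issues toward the target at 10.0.0.2 port 4420 (the NVMe/TCP default port) is being actively refused, which typically means nothing is listening on that address/port at that moment. The following is a minimal standalone sketch, not SPDK code, that reproduces the same errno; the address and port are simply taken from the log:

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),  /* NVMe/TCP default port, as in the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With the host reachable but nothing listening on the port,
         * this prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}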
00:27:06.036 [2024-07-15 20:52:40.436333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.436349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.436577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.436592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.436713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.436728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.436906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.436920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.437179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.437209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.437403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.437433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.437606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.437636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.437906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.437919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.438110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.438123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.438315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.438329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 
00:27:06.036 [2024-07-15 20:52:40.438487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.438519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.438739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.438778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.439142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.439171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.439459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.439490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.439799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.439829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.440149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.440177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.440493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.440524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.440754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.440783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.441084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.441114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.441423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.441453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 
00:27:06.036 [2024-07-15 20:52:40.441754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.441784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.442032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.442045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.442300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.442314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.442558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.442574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.442677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.442691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.442904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.442919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.443098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.443112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.443390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.443404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.443622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.443635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.443772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.443784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 
00:27:06.036 [2024-07-15 20:52:40.444028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.444041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.444295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.444309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.444487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.444500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.444642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.444655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.444788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.444802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.445082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.445112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.036 qpair failed and we were unable to recover it. 00:27:06.036 [2024-07-15 20:52:40.445369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.036 [2024-07-15 20:52:40.445401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.037 qpair failed and we were unable to recover it. 00:27:06.037 [2024-07-15 20:52:40.445553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.037 [2024-07-15 20:52:40.445582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.037 qpair failed and we were unable to recover it. 00:27:06.037 [2024-07-15 20:52:40.445811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.037 [2024-07-15 20:52:40.445845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.037 qpair failed and we were unable to recover it. 00:27:06.037 [2024-07-15 20:52:40.446125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.037 [2024-07-15 20:52:40.446155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.037 qpair failed and we were unable to recover it. 
00:27:06.037 [2024-07-15 20:52:40.446376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.037 [2024-07-15 20:52:40.446407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.037 qpair failed and we were unable to recover it. 00:27:06.037 [2024-07-15 20:52:40.446623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.037 [2024-07-15 20:52:40.446652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.037 qpair failed and we were unable to recover it. 00:27:06.037 [2024-07-15 20:52:40.446829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.037 [2024-07-15 20:52:40.446845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.037 qpair failed and we were unable to recover it. 00:27:06.037 [2024-07-15 20:52:40.447144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.037 [2024-07-15 20:52:40.447175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.037 qpair failed and we were unable to recover it. 00:27:06.037 [2024-07-15 20:52:40.447538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.037 [2024-07-15 20:52:40.447569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.037 qpair failed and we were unable to recover it. 00:27:06.037 [2024-07-15 20:52:40.447802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.037 [2024-07-15 20:52:40.447831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.037 qpair failed and we were unable to recover it. 00:27:06.037 [2024-07-15 20:52:40.448091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.037 [2024-07-15 20:52:40.448127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.037 qpair failed and we were unable to recover it. 00:27:06.037 [2024-07-15 20:52:40.448319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.037 [2024-07-15 20:52:40.448333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.037 qpair failed and we were unable to recover it. 00:27:06.037 [2024-07-15 20:52:40.448515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.037 [2024-07-15 20:52:40.448529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.037 qpair failed and we were unable to recover it. 00:27:06.037 [2024-07-15 20:52:40.448688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.037 [2024-07-15 20:52:40.448701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.037 qpair failed and we were unable to recover it. 
00:27:06.037 [2024-07-15 20:52:40.448945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.037 [2024-07-15 20:52:40.448958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.037 qpair failed and we were unable to recover it. 00:27:06.037 [2024-07-15 20:52:40.449155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.037 [2024-07-15 20:52:40.449171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.037 qpair failed and we were unable to recover it. 00:27:06.037 [2024-07-15 20:52:40.449465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.037 [2024-07-15 20:52:40.449479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.037 qpair failed and we were unable to recover it. 00:27:06.037 [2024-07-15 20:52:40.449549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8000 is same with the state(5) to be set 00:27:06.037 [2024-07-15 20:52:40.449775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.037 [2024-07-15 20:52:40.449786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.037 qpair failed and we were unable to recover it. 00:27:06.037 [2024-07-15 20:52:40.449908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.037 [2024-07-15 20:52:40.449918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.037 qpair failed and we were unable to recover it. 00:27:06.037 [2024-07-15 20:52:40.450183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.037 [2024-07-15 20:52:40.450211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.037 qpair failed and we were unable to recover it. 00:27:06.037 [2024-07-15 20:52:40.450440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.037 [2024-07-15 20:52:40.450470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.037 qpair failed and we were unable to recover it. 00:27:06.037 [2024-07-15 20:52:40.450708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.037 [2024-07-15 20:52:40.450737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.037 qpair failed and we were unable to recover it. 00:27:06.037 [2024-07-15 20:52:40.451038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.037 [2024-07-15 20:52:40.451048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.037 qpair failed and we were unable to recover it. 
00:27:06.037 [2024-07-15 20:52:40.451172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.037 [2024-07-15 20:52:40.451182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.037 qpair failed and we were unable to recover it.
00:27:06.037 [2024-07-15 20:52:40.451444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.037 [2024-07-15 20:52:40.451455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.037 qpair failed and we were unable to recover it.
00:27:06.037 [2024-07-15 20:52:40.451691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.037 [2024-07-15 20:52:40.451701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.037 qpair failed and we were unable to recover it.
00:27:06.037 [2024-07-15 20:52:40.451939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.037 [2024-07-15 20:52:40.451948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.037 qpair failed and we were unable to recover it.
00:27:06.037 [2024-07-15 20:52:40.452142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.037 [2024-07-15 20:52:40.452152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.037 qpair failed and we were unable to recover it.
00:27:06.037 [2024-07-15 20:52:40.452411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.037 [2024-07-15 20:52:40.452424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.037 qpair failed and we were unable to recover it.
00:27:06.037 [2024-07-15 20:52:40.452704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.037 [2024-07-15 20:52:40.452715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.037 qpair failed and we were unable to recover it.
00:27:06.037 [2024-07-15 20:52:40.452886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.037 [2024-07-15 20:52:40.452896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.037 qpair failed and we were unable to recover it.
00:27:06.037 [2024-07-15 20:52:40.453091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.037 [2024-07-15 20:52:40.453121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.037 qpair failed and we were unable to recover it.
00:27:06.037 [2024-07-15 20:52:40.453350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.037 [2024-07-15 20:52:40.453381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.037 qpair failed and we were unable to recover it.
00:27:06.037 [2024-07-15 20:52:40.453609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.037 [2024-07-15 20:52:40.453638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.037 qpair failed and we were unable to recover it.
00:27:06.037 [2024-07-15 20:52:40.453956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.037 [2024-07-15 20:52:40.453984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.037 qpair failed and we were unable to recover it.
00:27:06.037 [2024-07-15 20:52:40.454294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.037 [2024-07-15 20:52:40.454304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.037 qpair failed and we were unable to recover it.
00:27:06.037 [2024-07-15 20:52:40.454497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.037 [2024-07-15 20:52:40.454508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.037 qpair failed and we were unable to recover it.
00:27:06.037 [2024-07-15 20:52:40.454687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.037 [2024-07-15 20:52:40.454697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.037 qpair failed and we were unable to recover it.
00:27:06.037 [2024-07-15 20:52:40.454984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.037 [2024-07-15 20:52:40.454993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.037 qpair failed and we were unable to recover it.
00:27:06.037 [2024-07-15 20:52:40.455160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.037 [2024-07-15 20:52:40.455171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.037 qpair failed and we were unable to recover it.
00:27:06.037 [2024-07-15 20:52:40.455303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.037 [2024-07-15 20:52:40.455314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.037 qpair failed and we were unable to recover it.
00:27:06.037 [2024-07-15 20:52:40.455504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.037 [2024-07-15 20:52:40.455513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.037 qpair failed and we were unable to recover it.
00:27:06.037 [2024-07-15 20:52:40.455693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.037 [2024-07-15 20:52:40.455723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.455896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.455925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.456096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.456125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.456452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.456462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.456633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.456643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.456899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.456909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.457096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.457125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.457426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.457457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.457670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.457700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.457938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.457967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.458184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.458194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.458380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.458390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.458574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.458584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.458772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.458782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.459020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.459050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.459327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.459358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.459634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.459663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.459933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.459943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.460247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.460257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.460465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.460474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.460599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.460609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.460793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.460803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.461002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.461012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.461192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.461202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.461327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.461337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.461506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.461516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.461695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.461708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.461884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.461896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.462083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.462094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.462309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.462319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.462438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.462448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.462661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.462671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.462907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.462917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.463086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.463099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.463331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.463361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.463592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.463621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.463855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.463864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.464049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.464059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.464250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.464260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.464498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.464527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.464766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.464795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.038 [2024-07-15 20:52:40.465086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.038 [2024-07-15 20:52:40.465116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.038 qpair failed and we were unable to recover it.
00:27:06.039 [2024-07-15 20:52:40.465330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.039 [2024-07-15 20:52:40.465361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.039 qpair failed and we were unable to recover it.
00:27:06.039 [2024-07-15 20:52:40.465539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.039 [2024-07-15 20:52:40.465568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.039 qpair failed and we were unable to recover it.
00:27:06.039 [2024-07-15 20:52:40.465840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.039 [2024-07-15 20:52:40.465850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.039 qpair failed and we were unable to recover it.
00:27:06.039 [2024-07-15 20:52:40.466115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.039 [2024-07-15 20:52:40.466126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.039 qpair failed and we were unable to recover it.
00:27:06.039 [2024-07-15 20:52:40.466368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.039 [2024-07-15 20:52:40.466378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.039 qpair failed and we were unable to recover it.
00:27:06.039 [2024-07-15 20:52:40.466634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.039 [2024-07-15 20:52:40.466644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.039 qpair failed and we were unable to recover it.
00:27:06.039 [2024-07-15 20:52:40.466833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.039 [2024-07-15 20:52:40.466843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.039 qpair failed and we were unable to recover it.
00:27:06.039 [2024-07-15 20:52:40.467089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.039 [2024-07-15 20:52:40.467118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.039 qpair failed and we were unable to recover it.
00:27:06.039 [2024-07-15 20:52:40.467415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.039 [2024-07-15 20:52:40.467446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.039 qpair failed and we were unable to recover it.
00:27:06.039 [2024-07-15 20:52:40.467701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.039 [2024-07-15 20:52:40.467730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.039 qpair failed and we were unable to recover it.
00:27:06.039 [2024-07-15 20:52:40.467950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.039 [2024-07-15 20:52:40.467979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.039 qpair failed and we were unable to recover it.
00:27:06.039 [2024-07-15 20:52:40.468240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.039 [2024-07-15 20:52:40.468251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.039 qpair failed and we were unable to recover it.
00:27:06.039 [2024-07-15 20:52:40.468446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.039 [2024-07-15 20:52:40.468457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.039 qpair failed and we were unable to recover it.
00:27:06.039 [2024-07-15 20:52:40.468576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.039 [2024-07-15 20:52:40.468586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.039 qpair failed and we were unable to recover it.
00:27:06.039 [2024-07-15 20:52:40.468778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.039 [2024-07-15 20:52:40.468788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.039 qpair failed and we were unable to recover it.
00:27:06.039 [2024-07-15 20:52:40.469089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.039 [2024-07-15 20:52:40.469099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.039 qpair failed and we were unable to recover it.
00:27:06.039 [2024-07-15 20:52:40.469233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.039 [2024-07-15 20:52:40.469243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.039 qpair failed and we were unable to recover it.
00:27:06.039 [2024-07-15 20:52:40.469357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.039 [2024-07-15 20:52:40.469367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.039 qpair failed and we were unable to recover it.
00:27:06.039 [2024-07-15 20:52:40.469555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.039 [2024-07-15 20:52:40.469565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.039 qpair failed and we were unable to recover it.
00:27:06.039 [2024-07-15 20:52:40.469813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.039 [2024-07-15 20:52:40.469823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.039 qpair failed and we were unable to recover it.
00:27:06.039 [2024-07-15 20:52:40.470045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.039 [2024-07-15 20:52:40.470055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.039 qpair failed and we were unable to recover it.
00:27:06.039 [2024-07-15 20:52:40.470345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.039 [2024-07-15 20:52:40.470356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.039 qpair failed and we were unable to recover it.
00:27:06.039 [2024-07-15 20:52:40.470539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.039 [2024-07-15 20:52:40.470551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.039 qpair failed and we were unable to recover it.
00:27:06.039 [2024-07-15 20:52:40.470758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.039 [2024-07-15 20:52:40.470768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.039 qpair failed and we were unable to recover it.
00:27:06.039 [2024-07-15 20:52:40.470892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.039 [2024-07-15 20:52:40.470904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.039 qpair failed and we were unable to recover it.
00:27:06.039 [2024-07-15 20:52:40.471089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.039 [2024-07-15 20:52:40.471099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.039 qpair failed and we were unable to recover it.
00:27:06.039 [2024-07-15 20:52:40.471242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.039 [2024-07-15 20:52:40.471253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.039 qpair failed and we were unable to recover it.
00:27:06.039 [2024-07-15 20:52:40.471536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.039 [2024-07-15 20:52:40.471546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.039 qpair failed and we were unable to recover it.
00:27:06.039 [2024-07-15 20:52:40.471814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.039 [2024-07-15 20:52:40.471824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.039 qpair failed and we were unable to recover it.
00:27:06.039 [2024-07-15 20:52:40.472086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.039 [2024-07-15 20:52:40.472097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.039 qpair failed and we were unable to recover it.
00:27:06.039 [2024-07-15 20:52:40.472372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.039 [2024-07-15 20:52:40.472383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.039 qpair failed and we were unable to recover it.
00:27:06.039 [2024-07-15 20:52:40.472563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.039 [2024-07-15 20:52:40.472573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.039 qpair failed and we were unable to recover it.
00:27:06.039 [2024-07-15 20:52:40.472784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.039 [2024-07-15 20:52:40.472794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.039 qpair failed and we were unable to recover it.
00:27:06.039 [2024-07-15 20:52:40.473056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.039 [2024-07-15 20:52:40.473066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.039 qpair failed and we were unable to recover it.
00:27:06.039 [2024-07-15 20:52:40.473254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.039 [2024-07-15 20:52:40.473264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.039 qpair failed and we were unable to recover it.
00:27:06.039 [2024-07-15 20:52:40.473384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.039 [2024-07-15 20:52:40.473394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.040 qpair failed and we were unable to recover it.
00:27:06.040 [2024-07-15 20:52:40.473512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.040 [2024-07-15 20:52:40.473522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.040 qpair failed and we were unable to recover it.
00:27:06.040 [2024-07-15 20:52:40.473700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.040 [2024-07-15 20:52:40.473710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.040 qpair failed and we were unable to recover it.
00:27:06.040 [2024-07-15 20:52:40.473898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.040 [2024-07-15 20:52:40.473909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.040 qpair failed and we were unable to recover it.
00:27:06.040 [2024-07-15 20:52:40.474025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.040 [2024-07-15 20:52:40.474035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.040 qpair failed and we were unable to recover it.
00:27:06.040 [2024-07-15 20:52:40.474158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.040 [2024-07-15 20:52:40.474168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.040 qpair failed and we were unable to recover it.
00:27:06.040 [2024-07-15 20:52:40.474351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.040 [2024-07-15 20:52:40.474361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.040 qpair failed and we were unable to recover it.
00:27:06.040 [2024-07-15 20:52:40.474518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.040 [2024-07-15 20:52:40.474527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.040 qpair failed and we were unable to recover it.
00:27:06.040 [2024-07-15 20:52:40.474653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.040 [2024-07-15 20:52:40.474663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.040 qpair failed and we were unable to recover it.
00:27:06.040 [2024-07-15 20:52:40.474833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.040 [2024-07-15 20:52:40.474843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.040 qpair failed and we were unable to recover it.
00:27:06.040 [2024-07-15 20:52:40.474983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.040 [2024-07-15 20:52:40.474992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.040 qpair failed and we were unable to recover it.
00:27:06.040 [2024-07-15 20:52:40.475239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.040 [2024-07-15 20:52:40.475249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.040 qpair failed and we were unable to recover it.
00:27:06.040 [2024-07-15 20:52:40.475377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.040 [2024-07-15 20:52:40.475387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.040 qpair failed and we were unable to recover it.
00:27:06.040 [2024-07-15 20:52:40.475637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.040 [2024-07-15 20:52:40.475647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.040 qpair failed and we were unable to recover it.
00:27:06.040 [2024-07-15 20:52:40.475904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.040 [2024-07-15 20:52:40.475914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.040 qpair failed and we were unable to recover it.
00:27:06.040 [2024-07-15 20:52:40.476155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.040 [2024-07-15 20:52:40.476164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.040 qpair failed and we were unable to recover it.
00:27:06.040 [2024-07-15 20:52:40.476372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.040 [2024-07-15 20:52:40.476408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.040 qpair failed and we were unable to recover it.
00:27:06.040 [2024-07-15 20:52:40.476636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.040 [2024-07-15 20:52:40.476651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.040 qpair failed and we were unable to recover it.
00:27:06.040 [2024-07-15 20:52:40.476851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.040 [2024-07-15 20:52:40.476867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.040 qpair failed and we were unable to recover it.
00:27:06.040 [2024-07-15 20:52:40.477086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.040 [2024-07-15 20:52:40.477099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.040 qpair failed and we were unable to recover it.
00:27:06.040 [2024-07-15 20:52:40.477362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.040 [2024-07-15 20:52:40.477377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.040 qpair failed and we were unable to recover it.
00:27:06.040 [2024-07-15 20:52:40.477530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.040 [2024-07-15 20:52:40.477544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.040 qpair failed and we were unable to recover it.
00:27:06.040 [2024-07-15 20:52:40.477762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.040 [2024-07-15 20:52:40.477777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.040 qpair failed and we were unable to recover it.
00:27:06.040 [2024-07-15 20:52:40.478023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.040 [2024-07-15 20:52:40.478037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.040 qpair failed and we were unable to recover it.
00:27:06.040 [2024-07-15 20:52:40.478306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.040 [2024-07-15 20:52:40.478320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.040 qpair failed and we were unable to recover it.
00:27:06.040 [2024-07-15 20:52:40.478451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.040 [2024-07-15 20:52:40.478464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.040 qpair failed and we were unable to recover it.
00:27:06.040 [2024-07-15 20:52:40.478606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.040 [2024-07-15 20:52:40.478620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.040 qpair failed and we were unable to recover it.
00:27:06.040 [2024-07-15 20:52:40.478813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.040 [2024-07-15 20:52:40.478827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.040 qpair failed and we were unable to recover it.
00:27:06.040 [2024-07-15 20:52:40.479056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.040 [2024-07-15 20:52:40.479070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.040 qpair failed and we were unable to recover it.
00:27:06.040 [2024-07-15 20:52:40.479200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.040 [2024-07-15 20:52:40.479214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.040 qpair failed and we were unable to recover it.
00:27:06.040 [2024-07-15 20:52:40.479449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.040 [2024-07-15 20:52:40.479463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.040 qpair failed and we were unable to recover it.
00:27:06.040 [2024-07-15 20:52:40.479667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.040 [2024-07-15 20:52:40.479681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.040 qpair failed and we were unable to recover it.
00:27:06.040 [2024-07-15 20:52:40.479810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.040 [2024-07-15 20:52:40.479824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.040 qpair failed and we were unable to recover it.
00:27:06.040 [2024-07-15 20:52:40.480117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.040 [2024-07-15 20:52:40.480131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.040 qpair failed and we were unable to recover it.
00:27:06.040 [2024-07-15 20:52:40.480272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.040 [2024-07-15 20:52:40.480285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.040 qpair failed and we were unable to recover it.
00:27:06.040 [2024-07-15 20:52:40.480428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.040 [2024-07-15 20:52:40.480442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.040 qpair failed and we were unable to recover it.
00:27:06.040 [2024-07-15 20:52:40.480567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.040 [2024-07-15 20:52:40.480581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.040 qpair failed and we were unable to recover it.
00:27:06.315 [2024-07-15 20:52:40.480785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.315 [2024-07-15 20:52:40.480800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.315 qpair failed and we were unable to recover it.
00:27:06.315 [2024-07-15 20:52:40.480930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.315 [2024-07-15 20:52:40.480944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.315 qpair failed and we were unable to recover it.
00:27:06.315 [2024-07-15 20:52:40.481170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.315 [2024-07-15 20:52:40.481184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.315 qpair failed and we were unable to recover it.
00:27:06.315 [2024-07-15 20:52:40.481318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.315 [2024-07-15 20:52:40.481333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.315 qpair failed and we were unable to recover it.
00:27:06.315 [2024-07-15 20:52:40.481453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.315 [2024-07-15 20:52:40.481467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.315 qpair failed and we were unable to recover it.
00:27:06.315 [2024-07-15 20:52:40.481667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.315 [2024-07-15 20:52:40.481680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.315 qpair failed and we were unable to recover it.
00:27:06.315 [2024-07-15 20:52:40.481804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.315 [2024-07-15 20:52:40.481816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.315 qpair failed and we were unable to recover it.
00:27:06.315 [2024-07-15 20:52:40.481929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.315 [2024-07-15 20:52:40.481939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.315 qpair failed and we were unable to recover it.
00:27:06.315 [2024-07-15 20:52:40.482202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.315 [2024-07-15 20:52:40.482211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.315 qpair failed and we were unable to recover it.
00:27:06.315 [2024-07-15 20:52:40.482395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.316 [2024-07-15 20:52:40.482405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.316 qpair failed and we were unable to recover it.
00:27:06.316 [2024-07-15 20:52:40.482590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.316 [2024-07-15 20:52:40.482600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.316 qpair failed and we were unable to recover it.
00:27:06.316 [2024-07-15 20:52:40.482874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.316 [2024-07-15 20:52:40.482883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.316 qpair failed and we were unable to recover it.
00:27:06.316 [2024-07-15 20:52:40.483067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.316 [2024-07-15 20:52:40.483077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.316 qpair failed and we were unable to recover it.
00:27:06.316 [2024-07-15 20:52:40.483250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.316 [2024-07-15 20:52:40.483259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.316 qpair failed and we were unable to recover it.
00:27:06.316 [2024-07-15 20:52:40.483449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.316 [2024-07-15 20:52:40.483459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.316 qpair failed and we were unable to recover it.
00:27:06.316 [2024-07-15 20:52:40.483630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.316 [2024-07-15 20:52:40.483640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.316 qpair failed and we were unable to recover it.
00:27:06.316 [2024-07-15 20:52:40.483747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.316 [2024-07-15 20:52:40.483758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.316 qpair failed and we were unable to recover it.
00:27:06.316 [2024-07-15 20:52:40.483947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.316 [2024-07-15 20:52:40.483957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.316 qpair failed and we were unable to recover it.
00:27:06.316 [2024-07-15 20:52:40.484132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.316 [2024-07-15 20:52:40.484142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.316 qpair failed and we were unable to recover it.
00:27:06.316 [2024-07-15 20:52:40.484400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.316 [2024-07-15 20:52:40.484410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.316 qpair failed and we were unable to recover it.
00:27:06.316 [2024-07-15 20:52:40.484533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.316 [2024-07-15 20:52:40.484543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.316 qpair failed and we were unable to recover it.
00:27:06.316 [2024-07-15 20:52:40.484737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.316 [2024-07-15 20:52:40.484746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.316 qpair failed and we were unable to recover it.
00:27:06.316 [2024-07-15 20:52:40.485003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.316 [2024-07-15 20:52:40.485013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.316 qpair failed and we were unable to recover it.
00:27:06.316 [2024-07-15 20:52:40.485201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.316 [2024-07-15 20:52:40.485211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.316 qpair failed and we were unable to recover it.
00:27:06.316 [2024-07-15 20:52:40.485387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.316 [2024-07-15 20:52:40.485398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.316 qpair failed and we were unable to recover it.
00:27:06.316 [2024-07-15 20:52:40.485587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.316 [2024-07-15 20:52:40.485596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.316 qpair failed and we were unable to recover it.
00:27:06.316 [2024-07-15 20:52:40.485793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.316 [2024-07-15 20:52:40.485803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.316 qpair failed and we were unable to recover it.
00:27:06.316 [2024-07-15 20:52:40.485999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.316 [2024-07-15 20:52:40.486009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.316 qpair failed and we were unable to recover it.
00:27:06.316 [2024-07-15 20:52:40.486197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.316 [2024-07-15 20:52:40.486207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.316 qpair failed and we were unable to recover it.
00:27:06.316 [2024-07-15 20:52:40.486383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.316 [2024-07-15 20:52:40.486393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.316 qpair failed and we were unable to recover it.
00:27:06.316 [2024-07-15 20:52:40.486588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.316 [2024-07-15 20:52:40.486597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.316 qpair failed and we were unable to recover it.
00:27:06.316 [2024-07-15 20:52:40.486882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.316 [2024-07-15 20:52:40.486892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.316 qpair failed and we were unable to recover it.
00:27:06.316 [2024-07-15 20:52:40.487139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.316 [2024-07-15 20:52:40.487148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.316 qpair failed and we were unable to recover it.
00:27:06.316 [2024-07-15 20:52:40.487407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.316 [2024-07-15 20:52:40.487418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.316 qpair failed and we were unable to recover it.
00:27:06.316 [2024-07-15 20:52:40.487675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.316 [2024-07-15 20:52:40.487685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.316 qpair failed and we were unable to recover it.
00:27:06.316 [2024-07-15 20:52:40.487800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.316 [2024-07-15 20:52:40.487810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.316 qpair failed and we were unable to recover it.
00:27:06.316 [2024-07-15 20:52:40.488104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.316 [2024-07-15 20:52:40.488114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.316 qpair failed and we were unable to recover it.
00:27:06.316 [2024-07-15 20:52:40.488347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.316 [2024-07-15 20:52:40.488358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.316 qpair failed and we were unable to recover it. 00:27:06.316 [2024-07-15 20:52:40.488546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.316 [2024-07-15 20:52:40.488556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.316 qpair failed and we were unable to recover it. 00:27:06.316 [2024-07-15 20:52:40.488792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.316 [2024-07-15 20:52:40.488802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.316 qpair failed and we were unable to recover it. 00:27:06.316 [2024-07-15 20:52:40.488939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.316 [2024-07-15 20:52:40.488949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.316 qpair failed and we were unable to recover it. 00:27:06.316 [2024-07-15 20:52:40.489193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.316 [2024-07-15 20:52:40.489203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.317 qpair failed and we were unable to recover it. 00:27:06.317 [2024-07-15 20:52:40.489411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.317 [2024-07-15 20:52:40.489422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.317 qpair failed and we were unable to recover it. 00:27:06.317 [2024-07-15 20:52:40.489592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.317 [2024-07-15 20:52:40.489601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.317 qpair failed and we were unable to recover it. 00:27:06.317 [2024-07-15 20:52:40.489889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.317 [2024-07-15 20:52:40.489899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.317 qpair failed and we were unable to recover it. 00:27:06.317 [2024-07-15 20:52:40.490111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.317 [2024-07-15 20:52:40.490121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.317 qpair failed and we were unable to recover it. 00:27:06.317 [2024-07-15 20:52:40.490298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.317 [2024-07-15 20:52:40.490310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.317 qpair failed and we were unable to recover it. 
00:27:06.317 [2024-07-15 20:52:40.490489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.317 [2024-07-15 20:52:40.490499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.317 qpair failed and we were unable to recover it. 00:27:06.317 [2024-07-15 20:52:40.490673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.317 [2024-07-15 20:52:40.490682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.317 qpair failed and we were unable to recover it. 00:27:06.317 [2024-07-15 20:52:40.490964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.317 [2024-07-15 20:52:40.490973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.317 qpair failed and we were unable to recover it. 00:27:06.317 [2024-07-15 20:52:40.491152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.317 [2024-07-15 20:52:40.491162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.317 qpair failed and we were unable to recover it. 00:27:06.317 [2024-07-15 20:52:40.491344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.317 [2024-07-15 20:52:40.491354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.317 qpair failed and we were unable to recover it. 00:27:06.317 [2024-07-15 20:52:40.491535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.317 [2024-07-15 20:52:40.491545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.317 qpair failed and we were unable to recover it. 00:27:06.317 [2024-07-15 20:52:40.491806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.317 [2024-07-15 20:52:40.491816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.317 qpair failed and we were unable to recover it. 00:27:06.317 [2024-07-15 20:52:40.492135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.317 [2024-07-15 20:52:40.492145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.317 qpair failed and we were unable to recover it. 00:27:06.317 [2024-07-15 20:52:40.492407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.317 [2024-07-15 20:52:40.492417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.317 qpair failed and we were unable to recover it. 00:27:06.317 [2024-07-15 20:52:40.492598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.317 [2024-07-15 20:52:40.492607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.317 qpair failed and we were unable to recover it. 
00:27:06.317 [2024-07-15 20:52:40.492843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.317 [2024-07-15 20:52:40.492853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.317 qpair failed and we were unable to recover it. 00:27:06.317 [2024-07-15 20:52:40.493068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.317 [2024-07-15 20:52:40.493077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.317 qpair failed and we were unable to recover it. 00:27:06.317 [2024-07-15 20:52:40.493341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.317 [2024-07-15 20:52:40.493351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.317 qpair failed and we were unable to recover it. 00:27:06.317 [2024-07-15 20:52:40.493524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.317 [2024-07-15 20:52:40.493534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.317 qpair failed and we were unable to recover it. 00:27:06.317 [2024-07-15 20:52:40.493734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.317 [2024-07-15 20:52:40.493744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.317 qpair failed and we were unable to recover it. 00:27:06.317 [2024-07-15 20:52:40.493945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.317 [2024-07-15 20:52:40.493955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.317 qpair failed and we were unable to recover it. 00:27:06.317 [2024-07-15 20:52:40.494143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.317 [2024-07-15 20:52:40.494153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.317 qpair failed and we were unable to recover it. 00:27:06.317 [2024-07-15 20:52:40.494361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.317 [2024-07-15 20:52:40.494371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.317 qpair failed and we were unable to recover it. 00:27:06.317 [2024-07-15 20:52:40.494640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.317 [2024-07-15 20:52:40.494650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.317 qpair failed and we were unable to recover it. 00:27:06.317 [2024-07-15 20:52:40.494898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.317 [2024-07-15 20:52:40.494908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.317 qpair failed and we were unable to recover it. 
00:27:06.317 [2024-07-15 20:52:40.495088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.317 [2024-07-15 20:52:40.495098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.317 qpair failed and we were unable to recover it. 00:27:06.317 [2024-07-15 20:52:40.495377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.317 [2024-07-15 20:52:40.495387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.317 qpair failed and we were unable to recover it. 00:27:06.317 [2024-07-15 20:52:40.495622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.317 [2024-07-15 20:52:40.495632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.317 qpair failed and we were unable to recover it. 00:27:06.317 [2024-07-15 20:52:40.495878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.317 [2024-07-15 20:52:40.495888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.317 qpair failed and we were unable to recover it. 00:27:06.317 [2024-07-15 20:52:40.496122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.317 [2024-07-15 20:52:40.496132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.317 qpair failed and we were unable to recover it. 00:27:06.317 [2024-07-15 20:52:40.496302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.317 [2024-07-15 20:52:40.496313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.317 qpair failed and we were unable to recover it. 00:27:06.317 [2024-07-15 20:52:40.496515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.317 [2024-07-15 20:52:40.496525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.317 qpair failed and we were unable to recover it. 00:27:06.317 [2024-07-15 20:52:40.496712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.317 [2024-07-15 20:52:40.496721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.317 qpair failed and we were unable to recover it. 00:27:06.317 [2024-07-15 20:52:40.496982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.318 [2024-07-15 20:52:40.496991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.318 qpair failed and we were unable to recover it. 00:27:06.318 [2024-07-15 20:52:40.497188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.318 [2024-07-15 20:52:40.497198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.318 qpair failed and we were unable to recover it. 
00:27:06.318 [2024-07-15 20:52:40.497405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.318 [2024-07-15 20:52:40.497416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.318 qpair failed and we were unable to recover it. 00:27:06.318 [2024-07-15 20:52:40.497688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.318 [2024-07-15 20:52:40.497698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.318 qpair failed and we were unable to recover it. 00:27:06.318 [2024-07-15 20:52:40.497960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.318 [2024-07-15 20:52:40.497969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.318 qpair failed and we were unable to recover it. 00:27:06.318 [2024-07-15 20:52:40.498158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.318 [2024-07-15 20:52:40.498167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.318 qpair failed and we were unable to recover it. 00:27:06.318 [2024-07-15 20:52:40.498426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.318 [2024-07-15 20:52:40.498437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.318 qpair failed and we were unable to recover it. 00:27:06.318 [2024-07-15 20:52:40.498626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.318 [2024-07-15 20:52:40.498635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.318 qpair failed and we were unable to recover it. 00:27:06.318 [2024-07-15 20:52:40.498871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.318 [2024-07-15 20:52:40.498881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.318 qpair failed and we were unable to recover it. 00:27:06.318 [2024-07-15 20:52:40.499116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.318 [2024-07-15 20:52:40.499126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.318 qpair failed and we were unable to recover it. 00:27:06.318 [2024-07-15 20:52:40.499359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.318 [2024-07-15 20:52:40.499370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.318 qpair failed and we were unable to recover it. 00:27:06.318 [2024-07-15 20:52:40.499559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.318 [2024-07-15 20:52:40.499571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.318 qpair failed and we were unable to recover it. 
00:27:06.318 [2024-07-15 20:52:40.499809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.318 [2024-07-15 20:52:40.499819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.318 qpair failed and we were unable to recover it. 00:27:06.318 [2024-07-15 20:52:40.499935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.318 [2024-07-15 20:52:40.499944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.318 qpair failed and we were unable to recover it. 00:27:06.318 [2024-07-15 20:52:40.500201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.318 [2024-07-15 20:52:40.500212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.318 qpair failed and we were unable to recover it. 00:27:06.318 [2024-07-15 20:52:40.500497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.318 [2024-07-15 20:52:40.500508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.318 qpair failed and we were unable to recover it. 00:27:06.318 [2024-07-15 20:52:40.500742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.318 [2024-07-15 20:52:40.500752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.318 qpair failed and we were unable to recover it. 00:27:06.318 [2024-07-15 20:52:40.500994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.318 [2024-07-15 20:52:40.501004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.318 qpair failed and we were unable to recover it. 00:27:06.318 [2024-07-15 20:52:40.501211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.318 [2024-07-15 20:52:40.501220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.318 qpair failed and we were unable to recover it. 00:27:06.318 [2024-07-15 20:52:40.501515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.318 [2024-07-15 20:52:40.501525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.318 qpair failed and we were unable to recover it. 00:27:06.318 [2024-07-15 20:52:40.501705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.318 [2024-07-15 20:52:40.501715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.318 qpair failed and we were unable to recover it. 00:27:06.318 [2024-07-15 20:52:40.501939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.318 [2024-07-15 20:52:40.501949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.318 qpair failed and we were unable to recover it. 
00:27:06.318 [2024-07-15 20:52:40.502118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.318 [2024-07-15 20:52:40.502128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.318 qpair failed and we were unable to recover it. 00:27:06.318 [2024-07-15 20:52:40.502317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.318 [2024-07-15 20:52:40.502328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.318 qpair failed and we were unable to recover it. 00:27:06.318 [2024-07-15 20:52:40.502495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.318 [2024-07-15 20:52:40.502504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.318 qpair failed and we were unable to recover it. 00:27:06.318 [2024-07-15 20:52:40.502753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.318 [2024-07-15 20:52:40.502764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.318 qpair failed and we were unable to recover it. 00:27:06.318 [2024-07-15 20:52:40.502999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.318 [2024-07-15 20:52:40.503009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.318 qpair failed and we were unable to recover it. 00:27:06.318 [2024-07-15 20:52:40.503202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.318 [2024-07-15 20:52:40.503212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.318 qpair failed and we were unable to recover it. 00:27:06.318 [2024-07-15 20:52:40.503536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.318 [2024-07-15 20:52:40.503546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.318 qpair failed and we were unable to recover it. 00:27:06.318 [2024-07-15 20:52:40.503680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.318 [2024-07-15 20:52:40.503690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.318 qpair failed and we were unable to recover it. 00:27:06.318 [2024-07-15 20:52:40.503900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.318 [2024-07-15 20:52:40.503909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.318 qpair failed and we were unable to recover it. 00:27:06.318 [2024-07-15 20:52:40.504172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.318 [2024-07-15 20:52:40.504182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.318 qpair failed and we were unable to recover it. 
00:27:06.318 [2024-07-15 20:52:40.504367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.318 [2024-07-15 20:52:40.504378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.318 qpair failed and we were unable to recover it. 00:27:06.318 [2024-07-15 20:52:40.504640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.318 [2024-07-15 20:52:40.504650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.318 qpair failed and we were unable to recover it. 00:27:06.318 [2024-07-15 20:52:40.504909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.319 [2024-07-15 20:52:40.504919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.319 qpair failed and we were unable to recover it. 00:27:06.319 [2024-07-15 20:52:40.505104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.319 [2024-07-15 20:52:40.505114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.319 qpair failed and we were unable to recover it. 00:27:06.319 [2024-07-15 20:52:40.505373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.319 [2024-07-15 20:52:40.505384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.319 qpair failed and we were unable to recover it. 00:27:06.319 [2024-07-15 20:52:40.505677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.319 [2024-07-15 20:52:40.505687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.319 qpair failed and we were unable to recover it. 00:27:06.319 [2024-07-15 20:52:40.505946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.319 [2024-07-15 20:52:40.505956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.319 qpair failed and we were unable to recover it. 00:27:06.319 [2024-07-15 20:52:40.506133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.319 [2024-07-15 20:52:40.506143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.319 qpair failed and we were unable to recover it. 00:27:06.319 [2024-07-15 20:52:40.506383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.319 [2024-07-15 20:52:40.506393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.319 qpair failed and we were unable to recover it. 00:27:06.319 [2024-07-15 20:52:40.506628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.319 [2024-07-15 20:52:40.506638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.319 qpair failed and we were unable to recover it. 
00:27:06.319 [2024-07-15 20:52:40.506890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.319 [2024-07-15 20:52:40.506900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.319 qpair failed and we were unable to recover it. 00:27:06.319 [2024-07-15 20:52:40.507161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.319 [2024-07-15 20:52:40.507171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.319 qpair failed and we were unable to recover it. 00:27:06.319 [2024-07-15 20:52:40.507293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.319 [2024-07-15 20:52:40.507304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.319 qpair failed and we were unable to recover it. 00:27:06.319 [2024-07-15 20:52:40.507489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.319 [2024-07-15 20:52:40.507498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.319 qpair failed and we were unable to recover it. 00:27:06.319 [2024-07-15 20:52:40.507666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.319 [2024-07-15 20:52:40.507676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.319 qpair failed and we were unable to recover it. 00:27:06.319 [2024-07-15 20:52:40.507928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.319 [2024-07-15 20:52:40.507938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.319 qpair failed and we were unable to recover it. 00:27:06.319 [2024-07-15 20:52:40.508193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.319 [2024-07-15 20:52:40.508202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.319 qpair failed and we were unable to recover it. 00:27:06.319 [2024-07-15 20:52:40.508379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.319 [2024-07-15 20:52:40.508390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.319 qpair failed and we were unable to recover it. 00:27:06.319 [2024-07-15 20:52:40.508651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.319 [2024-07-15 20:52:40.508661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.319 qpair failed and we were unable to recover it. 00:27:06.319 [2024-07-15 20:52:40.508838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.319 [2024-07-15 20:52:40.508850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.319 qpair failed and we were unable to recover it. 
00:27:06.319 [2024-07-15 20:52:40.509032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.319 [2024-07-15 20:52:40.509042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.319 qpair failed and we were unable to recover it. 00:27:06.319 [2024-07-15 20:52:40.509247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.319 [2024-07-15 20:52:40.509258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.319 qpair failed and we were unable to recover it. 00:27:06.319 [2024-07-15 20:52:40.509496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.319 [2024-07-15 20:52:40.509506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.319 qpair failed and we were unable to recover it. 00:27:06.319 [2024-07-15 20:52:40.509695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.319 [2024-07-15 20:52:40.509705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.319 qpair failed and we were unable to recover it. 00:27:06.319 [2024-07-15 20:52:40.509891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.319 [2024-07-15 20:52:40.509901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.319 qpair failed and we were unable to recover it. 00:27:06.319 [2024-07-15 20:52:40.510132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.319 [2024-07-15 20:52:40.510142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.319 qpair failed and we were unable to recover it. 00:27:06.319 [2024-07-15 20:52:40.510395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.319 [2024-07-15 20:52:40.510405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.319 qpair failed and we were unable to recover it. 00:27:06.319 [2024-07-15 20:52:40.510590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.319 [2024-07-15 20:52:40.510599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.319 qpair failed and we were unable to recover it. 00:27:06.319 [2024-07-15 20:52:40.510865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.319 [2024-07-15 20:52:40.510875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.319 qpair failed and we were unable to recover it. 00:27:06.319 [2024-07-15 20:52:40.511047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.319 [2024-07-15 20:52:40.511057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.319 qpair failed and we were unable to recover it. 
00:27:06.319 [2024-07-15 20:52:40.511236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.319 [2024-07-15 20:52:40.511262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.319 qpair failed and we were unable to recover it. 00:27:06.319 [2024-07-15 20:52:40.511532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.319 [2024-07-15 20:52:40.511541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.319 qpair failed and we were unable to recover it. 00:27:06.319 [2024-07-15 20:52:40.511798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.319 [2024-07-15 20:52:40.511808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.319 qpair failed and we were unable to recover it. 00:27:06.319 [2024-07-15 20:52:40.511996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.319 [2024-07-15 20:52:40.512006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.319 qpair failed and we were unable to recover it. 00:27:06.319 [2024-07-15 20:52:40.512197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.319 [2024-07-15 20:52:40.512207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.319 qpair failed and we were unable to recover it. 00:27:06.320 [2024-07-15 20:52:40.512464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.320 [2024-07-15 20:52:40.512474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.320 qpair failed and we were unable to recover it. 00:27:06.320 [2024-07-15 20:52:40.512709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.320 [2024-07-15 20:52:40.512718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.320 qpair failed and we were unable to recover it. 00:27:06.320 [2024-07-15 20:52:40.512885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.320 [2024-07-15 20:52:40.512895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.320 qpair failed and we were unable to recover it. 00:27:06.320 [2024-07-15 20:52:40.513136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.320 [2024-07-15 20:52:40.513146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.320 qpair failed and we were unable to recover it. 00:27:06.320 [2024-07-15 20:52:40.513361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.320 [2024-07-15 20:52:40.513372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.320 qpair failed and we were unable to recover it. 
00:27:06.320 [2024-07-15 20:52:40.513559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.320 [2024-07-15 20:52:40.513569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.320 qpair failed and we were unable to recover it. 00:27:06.320 [2024-07-15 20:52:40.513741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.320 [2024-07-15 20:52:40.513751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.320 qpair failed and we were unable to recover it. 00:27:06.320 [2024-07-15 20:52:40.513958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.320 [2024-07-15 20:52:40.513967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.320 qpair failed and we were unable to recover it. 00:27:06.320 [2024-07-15 20:52:40.514216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.320 [2024-07-15 20:52:40.514272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.320 qpair failed and we were unable to recover it. 00:27:06.320 [2024-07-15 20:52:40.514562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.320 [2024-07-15 20:52:40.514592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.320 qpair failed and we were unable to recover it. 00:27:06.320 [2024-07-15 20:52:40.514885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.320 [2024-07-15 20:52:40.514914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.320 qpair failed and we were unable to recover it. 00:27:06.320 [2024-07-15 20:52:40.515163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.320 [2024-07-15 20:52:40.515193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.320 qpair failed and we were unable to recover it. 00:27:06.320 [2024-07-15 20:52:40.515432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.320 [2024-07-15 20:52:40.515463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.320 qpair failed and we were unable to recover it. 00:27:06.320 [2024-07-15 20:52:40.515762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.320 [2024-07-15 20:52:40.515790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.320 qpair failed and we were unable to recover it. 00:27:06.320 [2024-07-15 20:52:40.516084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.320 [2024-07-15 20:52:40.516113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.320 qpair failed and we were unable to recover it. 
00:27:06.320 [2024-07-15 20:52:40.516421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.320 [2024-07-15 20:52:40.516451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.320 qpair failed and we were unable to recover it. 00:27:06.320 [2024-07-15 20:52:40.516751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.320 [2024-07-15 20:52:40.516781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.320 qpair failed and we were unable to recover it. 00:27:06.320 [2024-07-15 20:52:40.517026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.320 [2024-07-15 20:52:40.517055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.320 qpair failed and we were unable to recover it. 00:27:06.320 [2024-07-15 20:52:40.517308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.320 [2024-07-15 20:52:40.517339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.320 qpair failed and we were unable to recover it. 00:27:06.320 [2024-07-15 20:52:40.517644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.320 [2024-07-15 20:52:40.517653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.320 qpair failed and we were unable to recover it. 00:27:06.320 [2024-07-15 20:52:40.517782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.320 [2024-07-15 20:52:40.517792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.320 qpair failed and we were unable to recover it. 00:27:06.320 [2024-07-15 20:52:40.518077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.320 [2024-07-15 20:52:40.518087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.320 qpair failed and we were unable to recover it. 00:27:06.320 [2024-07-15 20:52:40.518274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.320 [2024-07-15 20:52:40.518285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.320 qpair failed and we were unable to recover it. 00:27:06.320 [2024-07-15 20:52:40.518492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.320 [2024-07-15 20:52:40.518502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.320 qpair failed and we were unable to recover it. 00:27:06.320 [2024-07-15 20:52:40.518742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.320 [2024-07-15 20:52:40.518754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.320 qpair failed and we were unable to recover it. 
00:27:06.320 [2024-07-15 20:52:40.518964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.320 [2024-07-15 20:52:40.518974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.320 qpair failed and we were unable to recover it. 00:27:06.320 [2024-07-15 20:52:40.519239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.320 [2024-07-15 20:52:40.519249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.320 qpair failed and we were unable to recover it. 00:27:06.321 [2024-07-15 20:52:40.519451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.321 [2024-07-15 20:52:40.519461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.321 qpair failed and we were unable to recover it. 00:27:06.321 [2024-07-15 20:52:40.519654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.321 [2024-07-15 20:52:40.519663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.321 qpair failed and we were unable to recover it. 00:27:06.321 [2024-07-15 20:52:40.519862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.321 [2024-07-15 20:52:40.519871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.321 qpair failed and we were unable to recover it. 00:27:06.321 [2024-07-15 20:52:40.520150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.321 [2024-07-15 20:52:40.520180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.321 qpair failed and we were unable to recover it. 00:27:06.321 [2024-07-15 20:52:40.520420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.321 [2024-07-15 20:52:40.520450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.321 qpair failed and we were unable to recover it. 00:27:06.321 [2024-07-15 20:52:40.520612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.321 [2024-07-15 20:52:40.520640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.321 qpair failed and we were unable to recover it. 00:27:06.321 [2024-07-15 20:52:40.520887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.321 [2024-07-15 20:52:40.520917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.321 qpair failed and we were unable to recover it. 00:27:06.321 [2024-07-15 20:52:40.521158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.321 [2024-07-15 20:52:40.521188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.321 qpair failed and we were unable to recover it. 
00:27:06.321 [2024-07-15 20:52:40.521386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.321 [2024-07-15 20:52:40.521418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.321 qpair failed and we were unable to recover it. 00:27:06.321 [2024-07-15 20:52:40.521628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.321 [2024-07-15 20:52:40.521659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.321 qpair failed and we were unable to recover it. 00:27:06.321 [2024-07-15 20:52:40.521886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.321 [2024-07-15 20:52:40.521916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.321 qpair failed and we were unable to recover it. 00:27:06.321 [2024-07-15 20:52:40.522237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.321 [2024-07-15 20:52:40.522269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.321 qpair failed and we were unable to recover it. 00:27:06.321 [2024-07-15 20:52:40.522491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.321 [2024-07-15 20:52:40.522501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.321 qpair failed and we were unable to recover it. 00:27:06.321 [2024-07-15 20:52:40.522678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.321 [2024-07-15 20:52:40.522707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.321 qpair failed and we were unable to recover it. 00:27:06.321 [2024-07-15 20:52:40.522940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.321 [2024-07-15 20:52:40.522970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.321 qpair failed and we were unable to recover it. 00:27:06.321 [2024-07-15 20:52:40.523195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.321 [2024-07-15 20:52:40.523242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.321 qpair failed and we were unable to recover it. 00:27:06.321 [2024-07-15 20:52:40.523432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.321 [2024-07-15 20:52:40.523442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.321 qpair failed and we were unable to recover it. 00:27:06.321 [2024-07-15 20:52:40.523582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.321 [2024-07-15 20:52:40.523592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.321 qpair failed and we were unable to recover it. 
00:27:06.321 [2024-07-15 20:52:40.523780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.321 [2024-07-15 20:52:40.523789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.321 qpair failed and we were unable to recover it.
00:27:06.321 (same connect() failure and unrecoverable qpair repeated for tqpair=0x7f4840000b90 through [2024-07-15 20:52:40.532190])
00:27:06.322 [2024-07-15 20:52:40.532527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.322 [2024-07-15 20:52:40.532594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.322 qpair failed and we were unable to recover it.
00:27:06.322 (same connect() failure and unrecoverable qpair repeated for tqpair=0x7f4848000b90 through [2024-07-15 20:52:40.556962])
00:27:06.325 [2024-07-15 20:52:40.557281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.325 [2024-07-15 20:52:40.557306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.325 qpair failed and we were unable to recover it.
00:27:06.325 (same connect() failure and unrecoverable qpair repeated for tqpair=0x7f4840000b90 through [2024-07-15 20:52:40.567343])
00:27:06.326 [2024-07-15 20:52:40.567709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.326 [2024-07-15 20:52:40.567776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:06.326 qpair failed and we were unable to recover it.
00:27:06.328 (same connect() failure and unrecoverable qpair repeated for tqpair=0x7f4838000b90 through [2024-07-15 20:52:40.579986])
00:27:06.328 [2024-07-15 20:52:40.580267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.328 [2024-07-15 20:52:40.580297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.328 qpair failed and we were unable to recover it. 00:27:06.328 [2024-07-15 20:52:40.580628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.328 [2024-07-15 20:52:40.580657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.328 qpair failed and we were unable to recover it. 00:27:06.328 [2024-07-15 20:52:40.580913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.328 [2024-07-15 20:52:40.580942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.328 qpair failed and we were unable to recover it. 00:27:06.328 [2024-07-15 20:52:40.581152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.328 [2024-07-15 20:52:40.581165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.328 qpair failed and we were unable to recover it. 00:27:06.328 [2024-07-15 20:52:40.581429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.328 [2024-07-15 20:52:40.581443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.328 qpair failed and we were unable to recover it. 00:27:06.328 [2024-07-15 20:52:40.581632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.328 [2024-07-15 20:52:40.581646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.328 qpair failed and we were unable to recover it. 00:27:06.328 [2024-07-15 20:52:40.581863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.328 [2024-07-15 20:52:40.581876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.328 qpair failed and we were unable to recover it. 00:27:06.328 [2024-07-15 20:52:40.582143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.328 [2024-07-15 20:52:40.582156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.328 qpair failed and we were unable to recover it. 00:27:06.328 [2024-07-15 20:52:40.582411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.328 [2024-07-15 20:52:40.582425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.328 qpair failed and we were unable to recover it. 00:27:06.328 [2024-07-15 20:52:40.582667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.328 [2024-07-15 20:52:40.582683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.328 qpair failed and we were unable to recover it. 
00:27:06.328 [2024-07-15 20:52:40.582954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.328 [2024-07-15 20:52:40.582983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.328 qpair failed and we were unable to recover it. 00:27:06.328 [2024-07-15 20:52:40.583294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.328 [2024-07-15 20:52:40.583324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.328 qpair failed and we were unable to recover it. 00:27:06.328 [2024-07-15 20:52:40.583562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.328 [2024-07-15 20:52:40.583576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.328 qpair failed and we were unable to recover it. 00:27:06.328 [2024-07-15 20:52:40.583802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.328 [2024-07-15 20:52:40.583815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.328 qpair failed and we were unable to recover it. 00:27:06.328 [2024-07-15 20:52:40.584082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.328 [2024-07-15 20:52:40.584095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.328 qpair failed and we were unable to recover it. 00:27:06.328 [2024-07-15 20:52:40.584284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.328 [2024-07-15 20:52:40.584298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.328 qpair failed and we were unable to recover it. 00:27:06.328 [2024-07-15 20:52:40.584550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.328 [2024-07-15 20:52:40.584563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.328 qpair failed and we were unable to recover it. 00:27:06.328 [2024-07-15 20:52:40.584707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.328 [2024-07-15 20:52:40.584720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.328 qpair failed and we were unable to recover it. 00:27:06.328 [2024-07-15 20:52:40.584911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.328 [2024-07-15 20:52:40.584923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.328 qpair failed and we were unable to recover it. 00:27:06.328 [2024-07-15 20:52:40.585170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.328 [2024-07-15 20:52:40.585184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.328 qpair failed and we were unable to recover it. 
00:27:06.328 [2024-07-15 20:52:40.585398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.328 [2024-07-15 20:52:40.585412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.328 qpair failed and we were unable to recover it. 00:27:06.328 [2024-07-15 20:52:40.585552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.328 [2024-07-15 20:52:40.585565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.328 qpair failed and we were unable to recover it. 00:27:06.328 [2024-07-15 20:52:40.585839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.328 [2024-07-15 20:52:40.585869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.328 qpair failed and we were unable to recover it. 00:27:06.328 [2024-07-15 20:52:40.586157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.328 [2024-07-15 20:52:40.586187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.328 qpair failed and we were unable to recover it. 00:27:06.328 [2024-07-15 20:52:40.586448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.328 [2024-07-15 20:52:40.586462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.328 qpair failed and we were unable to recover it. 00:27:06.328 [2024-07-15 20:52:40.586669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.328 [2024-07-15 20:52:40.586698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.328 qpair failed and we were unable to recover it. 00:27:06.328 [2024-07-15 20:52:40.586989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.328 [2024-07-15 20:52:40.587017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.328 qpair failed and we were unable to recover it. 00:27:06.328 [2024-07-15 20:52:40.587327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.329 [2024-07-15 20:52:40.587357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.329 qpair failed and we were unable to recover it. 00:27:06.329 [2024-07-15 20:52:40.587662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.329 [2024-07-15 20:52:40.587691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.329 qpair failed and we were unable to recover it. 00:27:06.329 [2024-07-15 20:52:40.588008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.329 [2024-07-15 20:52:40.588038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.329 qpair failed and we were unable to recover it. 
00:27:06.329 [2024-07-15 20:52:40.588355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.329 [2024-07-15 20:52:40.588368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.329 qpair failed and we were unable to recover it. 00:27:06.329 [2024-07-15 20:52:40.588519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.329 [2024-07-15 20:52:40.588533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.329 qpair failed and we were unable to recover it. 00:27:06.329 [2024-07-15 20:52:40.588739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.329 [2024-07-15 20:52:40.588768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.329 qpair failed and we were unable to recover it. 00:27:06.329 [2024-07-15 20:52:40.589098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.329 [2024-07-15 20:52:40.589127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.329 qpair failed and we were unable to recover it. 00:27:06.329 [2024-07-15 20:52:40.589304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.329 [2024-07-15 20:52:40.589334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.329 qpair failed and we were unable to recover it. 00:27:06.329 [2024-07-15 20:52:40.589613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.329 [2024-07-15 20:52:40.589642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.329 qpair failed and we were unable to recover it. 00:27:06.329 [2024-07-15 20:52:40.589987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.329 [2024-07-15 20:52:40.590056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.329 qpair failed and we were unable to recover it. 00:27:06.329 [2024-07-15 20:52:40.590337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.329 [2024-07-15 20:52:40.590371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.329 qpair failed and we were unable to recover it. 00:27:06.329 [2024-07-15 20:52:40.590680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.329 [2024-07-15 20:52:40.590695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.329 qpair failed and we were unable to recover it. 00:27:06.329 [2024-07-15 20:52:40.590986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.329 [2024-07-15 20:52:40.591001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.329 qpair failed and we were unable to recover it. 
[... the same failure repeats for tqpair=0x7f4848000b90 from 20:52:40.590337 through 20:52:40.599625 ...]
00:27:06.330 [2024-07-15 20:52:40.599822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.330 [2024-07-15 20:52:40.599849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.330 qpair failed and we were unable to recover it.
[... the same failure repeats for tqpair=0x7f4840000b90 from 20:52:40.600168 through 20:52:40.611642 ...]
00:27:06.331 [2024-07-15 20:52:40.611965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.332 [2024-07-15 20:52:40.612035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.332 qpair failed and we were unable to recover it.
[... the same failure repeats for tqpair=0x18d9ed0 from 20:52:40.612247 through 20:52:40.628750 ...]
00:27:06.333 [2024-07-15 20:52:40.628998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.333 [2024-07-15 20:52:40.629028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.333 qpair failed and we were unable to recover it.
00:27:06.333 [2024-07-15 20:52:40.629263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.333 [2024-07-15 20:52:40.629299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.333 qpair failed and we were unable to recover it. 00:27:06.333 [2024-07-15 20:52:40.629609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.333 [2024-07-15 20:52:40.629639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.333 qpair failed and we were unable to recover it. 00:27:06.333 [2024-07-15 20:52:40.629964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.333 [2024-07-15 20:52:40.629995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.333 qpair failed and we were unable to recover it. 00:27:06.333 [2024-07-15 20:52:40.630220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.333 [2024-07-15 20:52:40.630263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.333 qpair failed and we were unable to recover it. 00:27:06.333 [2024-07-15 20:52:40.630498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.333 [2024-07-15 20:52:40.630529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.333 qpair failed and we were unable to recover it. 00:27:06.333 [2024-07-15 20:52:40.631833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.333 [2024-07-15 20:52:40.631858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.333 qpair failed and we were unable to recover it. 00:27:06.333 [2024-07-15 20:52:40.632080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.333 [2024-07-15 20:52:40.632095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.333 qpair failed and we were unable to recover it. 00:27:06.333 [2024-07-15 20:52:40.632387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.333 [2024-07-15 20:52:40.632402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.333 qpair failed and we were unable to recover it. 00:27:06.333 [2024-07-15 20:52:40.632604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.333 [2024-07-15 20:52:40.632619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.333 qpair failed and we were unable to recover it. 00:27:06.333 [2024-07-15 20:52:40.632762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.333 [2024-07-15 20:52:40.632777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.333 qpair failed and we were unable to recover it. 
00:27:06.333 [2024-07-15 20:52:40.632989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.333 [2024-07-15 20:52:40.633004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.333 qpair failed and we were unable to recover it. 00:27:06.333 [2024-07-15 20:52:40.633164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.333 [2024-07-15 20:52:40.633179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.333 qpair failed and we were unable to recover it. 00:27:06.333 [2024-07-15 20:52:40.633447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.334 [2024-07-15 20:52:40.633463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.334 qpair failed and we were unable to recover it. 00:27:06.334 [2024-07-15 20:52:40.633612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.334 [2024-07-15 20:52:40.633643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.334 qpair failed and we were unable to recover it. 00:27:06.334 [2024-07-15 20:52:40.633963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.334 [2024-07-15 20:52:40.633995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.334 qpair failed and we were unable to recover it. 00:27:06.334 [2024-07-15 20:52:40.634325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.334 [2024-07-15 20:52:40.634357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.334 qpair failed and we were unable to recover it. 00:27:06.334 [2024-07-15 20:52:40.634577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.334 [2024-07-15 20:52:40.634608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.334 qpair failed and we were unable to recover it. 00:27:06.334 [2024-07-15 20:52:40.634867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.334 [2024-07-15 20:52:40.634882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.334 qpair failed and we were unable to recover it. 00:27:06.334 [2024-07-15 20:52:40.635038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.334 [2024-07-15 20:52:40.635069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.334 qpair failed and we were unable to recover it. 00:27:06.334 [2024-07-15 20:52:40.635300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.334 [2024-07-15 20:52:40.635333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.334 qpair failed and we were unable to recover it. 
00:27:06.334 [2024-07-15 20:52:40.635596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.334 [2024-07-15 20:52:40.635627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.334 qpair failed and we were unable to recover it. 00:27:06.334 [2024-07-15 20:52:40.635800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.334 [2024-07-15 20:52:40.635831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.334 qpair failed and we were unable to recover it. 00:27:06.334 [2024-07-15 20:52:40.636062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.334 [2024-07-15 20:52:40.636093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.334 qpair failed and we were unable to recover it. 00:27:06.334 [2024-07-15 20:52:40.636393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.334 [2024-07-15 20:52:40.636409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.334 qpair failed and we were unable to recover it. 00:27:06.334 [2024-07-15 20:52:40.636683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.334 [2024-07-15 20:52:40.636714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.334 qpair failed and we were unable to recover it. 00:27:06.334 [2024-07-15 20:52:40.637034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.334 [2024-07-15 20:52:40.637065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.334 qpair failed and we were unable to recover it. 00:27:06.334 [2024-07-15 20:52:40.637351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.334 [2024-07-15 20:52:40.637366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.334 qpair failed and we were unable to recover it. 00:27:06.334 [2024-07-15 20:52:40.637579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.334 [2024-07-15 20:52:40.637595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.334 qpair failed and we were unable to recover it. 00:27:06.334 [2024-07-15 20:52:40.637841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.334 [2024-07-15 20:52:40.637874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.334 qpair failed and we were unable to recover it. 00:27:06.334 [2024-07-15 20:52:40.638035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.334 [2024-07-15 20:52:40.638066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.334 qpair failed and we were unable to recover it. 
00:27:06.334 [2024-07-15 20:52:40.638326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.334 [2024-07-15 20:52:40.638342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.334 qpair failed and we were unable to recover it. 00:27:06.334 [2024-07-15 20:52:40.638562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.334 [2024-07-15 20:52:40.638577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.334 qpair failed and we were unable to recover it. 00:27:06.334 [2024-07-15 20:52:40.638775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.334 [2024-07-15 20:52:40.638790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.334 qpair failed and we were unable to recover it. 00:27:06.334 [2024-07-15 20:52:40.638993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.334 [2024-07-15 20:52:40.639009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.334 qpair failed and we were unable to recover it. 00:27:06.334 [2024-07-15 20:52:40.639236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.334 [2024-07-15 20:52:40.639252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.334 qpair failed and we were unable to recover it. 00:27:06.334 [2024-07-15 20:52:40.639498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.334 [2024-07-15 20:52:40.639515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.334 qpair failed and we were unable to recover it. 00:27:06.334 [2024-07-15 20:52:40.639759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.334 [2024-07-15 20:52:40.639791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.334 qpair failed and we were unable to recover it. 00:27:06.334 [2024-07-15 20:52:40.640076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.334 [2024-07-15 20:52:40.640108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.334 qpair failed and we were unable to recover it. 00:27:06.334 [2024-07-15 20:52:40.640328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.334 [2024-07-15 20:52:40.640361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.334 qpair failed and we were unable to recover it. 00:27:06.334 [2024-07-15 20:52:40.640525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.334 [2024-07-15 20:52:40.640540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.334 qpair failed and we were unable to recover it. 
00:27:06.334 [2024-07-15 20:52:40.640744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.334 [2024-07-15 20:52:40.640774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.334 qpair failed and we were unable to recover it. 00:27:06.334 [2024-07-15 20:52:40.641078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.334 [2024-07-15 20:52:40.641110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.334 qpair failed and we were unable to recover it. 00:27:06.334 [2024-07-15 20:52:40.641346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.334 [2024-07-15 20:52:40.641377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.334 qpair failed and we were unable to recover it. 00:27:06.334 [2024-07-15 20:52:40.641543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.334 [2024-07-15 20:52:40.641558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.334 qpair failed and we were unable to recover it. 00:27:06.334 [2024-07-15 20:52:40.641830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.335 [2024-07-15 20:52:40.641860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.335 qpair failed and we were unable to recover it. 00:27:06.335 [2024-07-15 20:52:40.642140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.335 [2024-07-15 20:52:40.642171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.335 qpair failed and we were unable to recover it. 00:27:06.335 [2024-07-15 20:52:40.642452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.335 [2024-07-15 20:52:40.642484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.335 qpair failed and we were unable to recover it. 00:27:06.335 [2024-07-15 20:52:40.642668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.335 [2024-07-15 20:52:40.642711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.335 qpair failed and we were unable to recover it. 00:27:06.335 [2024-07-15 20:52:40.642996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.335 [2024-07-15 20:52:40.643028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.335 qpair failed and we were unable to recover it. 00:27:06.335 [2024-07-15 20:52:40.643339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.335 [2024-07-15 20:52:40.643373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.335 qpair failed and we were unable to recover it. 
00:27:06.335 [2024-07-15 20:52:40.643611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.335 [2024-07-15 20:52:40.643642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.335 qpair failed and we were unable to recover it. 00:27:06.335 [2024-07-15 20:52:40.643814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.335 [2024-07-15 20:52:40.643829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.335 qpair failed and we were unable to recover it. 00:27:06.335 [2024-07-15 20:52:40.644087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.335 [2024-07-15 20:52:40.644117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.335 qpair failed and we were unable to recover it. 00:27:06.335 [2024-07-15 20:52:40.644368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.335 [2024-07-15 20:52:40.644400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.335 qpair failed and we were unable to recover it. 00:27:06.335 [2024-07-15 20:52:40.644628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.335 [2024-07-15 20:52:40.644659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.335 qpair failed and we were unable to recover it. 00:27:06.335 [2024-07-15 20:52:40.644832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.335 [2024-07-15 20:52:40.644864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.335 qpair failed and we were unable to recover it. 00:27:06.335 [2024-07-15 20:52:40.645192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.335 [2024-07-15 20:52:40.645223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.335 qpair failed and we were unable to recover it. 00:27:06.335 [2024-07-15 20:52:40.645414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.335 [2024-07-15 20:52:40.645446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.335 qpair failed and we were unable to recover it. 00:27:06.335 [2024-07-15 20:52:40.645634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.335 [2024-07-15 20:52:40.645665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.335 qpair failed and we were unable to recover it. 00:27:06.335 [2024-07-15 20:52:40.645894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.335 [2024-07-15 20:52:40.645910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.335 qpair failed and we were unable to recover it. 
00:27:06.335 [2024-07-15 20:52:40.646197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.335 [2024-07-15 20:52:40.646237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.335 qpair failed and we were unable to recover it. 00:27:06.335 [2024-07-15 20:52:40.646519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.335 [2024-07-15 20:52:40.646550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.335 qpair failed and we were unable to recover it. 00:27:06.335 [2024-07-15 20:52:40.646841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.335 [2024-07-15 20:52:40.646857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.335 qpair failed and we were unable to recover it. 00:27:06.335 [2024-07-15 20:52:40.646991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.335 [2024-07-15 20:52:40.647007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.335 qpair failed and we were unable to recover it. 00:27:06.335 [2024-07-15 20:52:40.647302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.335 [2024-07-15 20:52:40.647334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.335 qpair failed and we were unable to recover it. 00:27:06.335 [2024-07-15 20:52:40.647510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.335 [2024-07-15 20:52:40.647541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.335 qpair failed and we were unable to recover it. 00:27:06.335 [2024-07-15 20:52:40.647773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.335 [2024-07-15 20:52:40.647803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.335 qpair failed and we were unable to recover it. 00:27:06.335 [2024-07-15 20:52:40.648015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.335 [2024-07-15 20:52:40.648045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.335 qpair failed and we were unable to recover it. 00:27:06.335 [2024-07-15 20:52:40.648270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.335 [2024-07-15 20:52:40.648307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.335 qpair failed and we were unable to recover it. 00:27:06.335 [2024-07-15 20:52:40.648593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.335 [2024-07-15 20:52:40.648624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.335 qpair failed and we were unable to recover it. 
00:27:06.335 [2024-07-15 20:52:40.648909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.335 [2024-07-15 20:52:40.648924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.335 qpair failed and we were unable to recover it. 00:27:06.335 [2024-07-15 20:52:40.649734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.335 [2024-07-15 20:52:40.649761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.335 qpair failed and we were unable to recover it. 00:27:06.335 [2024-07-15 20:52:40.650071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.335 [2024-07-15 20:52:40.650102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.335 qpair failed and we were unable to recover it. 00:27:06.335 [2024-07-15 20:52:40.650296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.335 [2024-07-15 20:52:40.650312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.335 qpair failed and we were unable to recover it. 00:27:06.335 [2024-07-15 20:52:40.650455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.335 [2024-07-15 20:52:40.650470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.335 qpair failed and we were unable to recover it. 00:27:06.335 [2024-07-15 20:52:40.650659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.335 [2024-07-15 20:52:40.650689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.335 qpair failed and we were unable to recover it. 00:27:06.335 [2024-07-15 20:52:40.650970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.335 [2024-07-15 20:52:40.651001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.335 qpair failed and we were unable to recover it. 00:27:06.335 [2024-07-15 20:52:40.651342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.336 [2024-07-15 20:52:40.651373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.336 qpair failed and we were unable to recover it. 00:27:06.336 [2024-07-15 20:52:40.651544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.336 [2024-07-15 20:52:40.651575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.336 qpair failed and we were unable to recover it. 00:27:06.336 [2024-07-15 20:52:40.651834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.336 [2024-07-15 20:52:40.651848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.336 qpair failed and we were unable to recover it. 
00:27:06.336 [2024-07-15 20:52:40.652135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.336 [2024-07-15 20:52:40.652165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.336 qpair failed and we were unable to recover it. 00:27:06.336 [2024-07-15 20:52:40.652368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.336 [2024-07-15 20:52:40.652400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.336 qpair failed and we were unable to recover it. 00:27:06.336 [2024-07-15 20:52:40.652716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.336 [2024-07-15 20:52:40.652747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.336 qpair failed and we were unable to recover it. 00:27:06.336 [2024-07-15 20:52:40.652979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.336 [2024-07-15 20:52:40.653013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.336 qpair failed and we were unable to recover it. 00:27:06.336 [2024-07-15 20:52:40.653342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.336 [2024-07-15 20:52:40.653374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.336 qpair failed and we were unable to recover it. 00:27:06.336 [2024-07-15 20:52:40.653556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.336 [2024-07-15 20:52:40.653588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.336 qpair failed and we were unable to recover it. 00:27:06.336 [2024-07-15 20:52:40.653849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.336 [2024-07-15 20:52:40.653879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.336 qpair failed and we were unable to recover it. 00:27:06.336 [2024-07-15 20:52:40.654104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.336 [2024-07-15 20:52:40.654136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.336 qpair failed and we were unable to recover it. 00:27:06.336 [2024-07-15 20:52:40.654312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.336 [2024-07-15 20:52:40.654328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.336 qpair failed and we were unable to recover it. 00:27:06.336 [2024-07-15 20:52:40.654626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.336 [2024-07-15 20:52:40.654656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.336 qpair failed and we were unable to recover it. 
00:27:06.336 [2024-07-15 20:52:40.655026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.336 [2024-07-15 20:52:40.655056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.336 qpair failed and we were unable to recover it. 00:27:06.336 [2024-07-15 20:52:40.655304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.336 [2024-07-15 20:52:40.655336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.336 qpair failed and we were unable to recover it. 00:27:06.336 [2024-07-15 20:52:40.655638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.336 [2024-07-15 20:52:40.655668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.336 qpair failed and we were unable to recover it. 00:27:06.336 [2024-07-15 20:52:40.655831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.336 [2024-07-15 20:52:40.655862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.336 qpair failed and we were unable to recover it. 00:27:06.336 [2024-07-15 20:52:40.656089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.336 [2024-07-15 20:52:40.656119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.336 qpair failed and we were unable to recover it. 00:27:06.336 [2024-07-15 20:52:40.656363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.336 [2024-07-15 20:52:40.656400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.336 qpair failed and we were unable to recover it. 00:27:06.336 [2024-07-15 20:52:40.656634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.336 [2024-07-15 20:52:40.656665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.336 qpair failed and we were unable to recover it. 00:27:06.336 [2024-07-15 20:52:40.656837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.336 [2024-07-15 20:52:40.656851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.336 qpair failed and we were unable to recover it. 00:27:06.336 [2024-07-15 20:52:40.657113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.336 [2024-07-15 20:52:40.657127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.336 qpair failed and we were unable to recover it. 00:27:06.336 [2024-07-15 20:52:40.657326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.336 [2024-07-15 20:52:40.657342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.336 qpair failed and we were unable to recover it. 
00:27:06.336 [2024-07-15 20:52:40.657613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.336 [2024-07-15 20:52:40.657644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.336 qpair failed and we were unable to recover it. 00:27:06.336 [2024-07-15 20:52:40.657813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.336 [2024-07-15 20:52:40.657844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.336 qpair failed and we were unable to recover it. 00:27:06.336 [2024-07-15 20:52:40.658088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.336 [2024-07-15 20:52:40.658119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.336 qpair failed and we were unable to recover it. 00:27:06.336 [2024-07-15 20:52:40.658299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.336 [2024-07-15 20:52:40.658315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.336 qpair failed and we were unable to recover it. 00:27:06.336 [2024-07-15 20:52:40.658590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.336 [2024-07-15 20:52:40.658620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.336 qpair failed and we were unable to recover it. 00:27:06.336 [2024-07-15 20:52:40.658863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.336 [2024-07-15 20:52:40.658894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.336 qpair failed and we were unable to recover it. 00:27:06.336 [2024-07-15 20:52:40.659095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.336 [2024-07-15 20:52:40.659125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.336 qpair failed and we were unable to recover it. 00:27:06.336 [2024-07-15 20:52:40.659405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.336 [2024-07-15 20:52:40.659437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.336 qpair failed and we were unable to recover it. 00:27:06.336 [2024-07-15 20:52:40.659645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.336 [2024-07-15 20:52:40.659676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.336 qpair failed and we were unable to recover it. 00:27:06.336 [2024-07-15 20:52:40.659946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.336 [2024-07-15 20:52:40.659977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.336 qpair failed and we were unable to recover it. 
00:27:06.336 [2024-07-15 20:52:40.660210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.337 [2024-07-15 20:52:40.660320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.337 qpair failed and we were unable to recover it. 00:27:06.337 [2024-07-15 20:52:40.660610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.337 [2024-07-15 20:52:40.660642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.337 qpair failed and we were unable to recover it. 00:27:06.337 [2024-07-15 20:52:40.660957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.337 [2024-07-15 20:52:40.660973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.337 qpair failed and we were unable to recover it. 00:27:06.337 [2024-07-15 20:52:40.661240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.337 [2024-07-15 20:52:40.661256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.337 qpair failed and we were unable to recover it. 00:27:06.337 [2024-07-15 20:52:40.661476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.337 [2024-07-15 20:52:40.661492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.337 qpair failed and we were unable to recover it. 00:27:06.337 [2024-07-15 20:52:40.661624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.337 [2024-07-15 20:52:40.661639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.337 qpair failed and we were unable to recover it. 00:27:06.337 [2024-07-15 20:52:40.661837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.337 [2024-07-15 20:52:40.661867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.337 qpair failed and we were unable to recover it. 00:27:06.337 [2024-07-15 20:52:40.662130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.337 [2024-07-15 20:52:40.662161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.337 qpair failed and we were unable to recover it. 00:27:06.337 [2024-07-15 20:52:40.662399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.337 [2024-07-15 20:52:40.662431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.337 qpair failed and we were unable to recover it. 00:27:06.337 [2024-07-15 20:52:40.662661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.337 [2024-07-15 20:52:40.662704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.337 qpair failed and we were unable to recover it. 
00:27:06.337 [2024-07-15 20:52:40.662907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.337 [2024-07-15 20:52:40.662924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.337 qpair failed and we were unable to recover it. 00:27:06.337 [2024-07-15 20:52:40.663167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.337 [2024-07-15 20:52:40.663183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.337 qpair failed and we were unable to recover it. 00:27:06.337 [2024-07-15 20:52:40.663500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.337 [2024-07-15 20:52:40.663538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.337 qpair failed and we were unable to recover it. 00:27:06.337 [2024-07-15 20:52:40.663765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.337 [2024-07-15 20:52:40.663796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.337 qpair failed and we were unable to recover it. 00:27:06.337 [2024-07-15 20:52:40.664034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.337 [2024-07-15 20:52:40.664066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.337 qpair failed and we were unable to recover it. 00:27:06.337 [2024-07-15 20:52:40.664310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.337 [2024-07-15 20:52:40.664333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.337 qpair failed and we were unable to recover it. 00:27:06.337 [2024-07-15 20:52:40.664516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.337 [2024-07-15 20:52:40.664531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.337 qpair failed and we were unable to recover it. 00:27:06.337 [2024-07-15 20:52:40.664739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.337 [2024-07-15 20:52:40.664770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.337 qpair failed and we were unable to recover it. 00:27:06.337 [2024-07-15 20:52:40.664923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.337 [2024-07-15 20:52:40.664954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.337 qpair failed and we were unable to recover it. 00:27:06.337 [2024-07-15 20:52:40.665257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.337 [2024-07-15 20:52:40.665291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.337 qpair failed and we were unable to recover it. 
00:27:06.337 [2024-07-15 20:52:40.665521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.337 [2024-07-15 20:52:40.665536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.337 qpair failed and we were unable to recover it. 00:27:06.337 [2024-07-15 20:52:40.665755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.337 [2024-07-15 20:52:40.665770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.337 qpair failed and we were unable to recover it. 00:27:06.337 [2024-07-15 20:52:40.665953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.337 [2024-07-15 20:52:40.665968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.337 qpair failed and we were unable to recover it. 00:27:06.337 [2024-07-15 20:52:40.666191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.337 [2024-07-15 20:52:40.666222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.337 qpair failed and we were unable to recover it. 00:27:06.337 [2024-07-15 20:52:40.666487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.337 [2024-07-15 20:52:40.666520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.337 qpair failed and we were unable to recover it. 00:27:06.337 [2024-07-15 20:52:40.666687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.337 [2024-07-15 20:52:40.666725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.337 qpair failed and we were unable to recover it. 00:27:06.337 [2024-07-15 20:52:40.667011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.337 [2024-07-15 20:52:40.667045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.337 qpair failed and we were unable to recover it. 00:27:06.337 [2024-07-15 20:52:40.667262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.337 [2024-07-15 20:52:40.667307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.337 qpair failed and we were unable to recover it. 00:27:06.337 [2024-07-15 20:52:40.667611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.338 [2024-07-15 20:52:40.667643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.338 qpair failed and we were unable to recover it. 00:27:06.338 [2024-07-15 20:52:40.667877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.338 [2024-07-15 20:52:40.667909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.338 qpair failed and we were unable to recover it. 
00:27:06.338 [2024-07-15 20:52:40.668212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.338 [2024-07-15 20:52:40.668253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.338 qpair failed and we were unable to recover it. 00:27:06.338 [2024-07-15 20:52:40.668554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.338 [2024-07-15 20:52:40.668585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.338 qpair failed and we were unable to recover it. 00:27:06.338 [2024-07-15 20:52:40.668810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.338 [2024-07-15 20:52:40.668841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.338 qpair failed and we were unable to recover it. 00:27:06.338 [2024-07-15 20:52:40.669129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.338 [2024-07-15 20:52:40.669160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.338 qpair failed and we were unable to recover it. 00:27:06.338 [2024-07-15 20:52:40.669468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.338 [2024-07-15 20:52:40.669502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.338 qpair failed and we were unable to recover it. 00:27:06.338 [2024-07-15 20:52:40.669821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.338 [2024-07-15 20:52:40.669837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.338 qpair failed and we were unable to recover it. 00:27:06.338 [2024-07-15 20:52:40.670026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.338 [2024-07-15 20:52:40.670042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.338 qpair failed and we were unable to recover it. 00:27:06.338 [2024-07-15 20:52:40.670295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.338 [2024-07-15 20:52:40.670311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.338 qpair failed and we were unable to recover it. 00:27:06.338 [2024-07-15 20:52:40.670466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.338 [2024-07-15 20:52:40.670497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.338 qpair failed and we were unable to recover it. 00:27:06.338 [2024-07-15 20:52:40.670691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.338 [2024-07-15 20:52:40.670731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.338 qpair failed and we were unable to recover it. 
00:27:06.338 [2024-07-15 20:52:40.670959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.338 [2024-07-15 20:52:40.670989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.338 qpair failed and we were unable to recover it. 00:27:06.338 [2024-07-15 20:52:40.671221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.338 [2024-07-15 20:52:40.671261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.338 qpair failed and we were unable to recover it. 00:27:06.338 [2024-07-15 20:52:40.671494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.338 [2024-07-15 20:52:40.671510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.338 qpair failed and we were unable to recover it. 00:27:06.338 [2024-07-15 20:52:40.671703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.338 [2024-07-15 20:52:40.671718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.338 qpair failed and we were unable to recover it. 00:27:06.338 [2024-07-15 20:52:40.671931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.338 [2024-07-15 20:52:40.671946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.338 qpair failed and we were unable to recover it. 00:27:06.338 [2024-07-15 20:52:40.672237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.338 [2024-07-15 20:52:40.672270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.338 qpair failed and we were unable to recover it. 00:27:06.338 [2024-07-15 20:52:40.672587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.338 [2024-07-15 20:52:40.672619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.338 qpair failed and we were unable to recover it. 00:27:06.338 [2024-07-15 20:52:40.672922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.338 [2024-07-15 20:52:40.672953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.338 qpair failed and we were unable to recover it. 00:27:06.338 [2024-07-15 20:52:40.673268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.338 [2024-07-15 20:52:40.673300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.338 qpair failed and we were unable to recover it. 00:27:06.338 [2024-07-15 20:52:40.673530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.338 [2024-07-15 20:52:40.673562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.338 qpair failed and we were unable to recover it. 
00:27:06.338 [2024-07-15 20:52:40.673791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.338 [2024-07-15 20:52:40.673824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.338 qpair failed and we were unable to recover it. 00:27:06.338 [2024-07-15 20:52:40.674072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.338 [2024-07-15 20:52:40.674103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.338 qpair failed and we were unable to recover it. 00:27:06.338 [2024-07-15 20:52:40.674396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.338 [2024-07-15 20:52:40.674428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.338 qpair failed and we were unable to recover it. 00:27:06.338 [2024-07-15 20:52:40.674602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.338 [2024-07-15 20:52:40.674634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.338 qpair failed and we were unable to recover it. 00:27:06.338 [2024-07-15 20:52:40.674923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.338 [2024-07-15 20:52:40.674956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.338 qpair failed and we were unable to recover it. 00:27:06.338 [2024-07-15 20:52:40.675154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.338 [2024-07-15 20:52:40.675185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.338 qpair failed and we were unable to recover it. 00:27:06.338 [2024-07-15 20:52:40.675388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.338 [2024-07-15 20:52:40.675420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.338 qpair failed and we were unable to recover it. 00:27:06.338 [2024-07-15 20:52:40.675697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.338 [2024-07-15 20:52:40.675712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.338 qpair failed and we were unable to recover it. 00:27:06.338 [2024-07-15 20:52:40.676031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.338 [2024-07-15 20:52:40.676047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.338 qpair failed and we were unable to recover it. 00:27:06.338 [2024-07-15 20:52:40.676321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.338 [2024-07-15 20:52:40.676337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.338 qpair failed and we were unable to recover it. 
00:27:06.338 [2024-07-15 20:52:40.676544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.338 [2024-07-15 20:52:40.676560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.338 qpair failed and we were unable to recover it. 00:27:06.338 [2024-07-15 20:52:40.676706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.338 [2024-07-15 20:52:40.676722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.338 qpair failed and we were unable to recover it. 00:27:06.338 [2024-07-15 20:52:40.676937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.338 [2024-07-15 20:52:40.676967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.338 qpair failed and we were unable to recover it. 00:27:06.338 [2024-07-15 20:52:40.677206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.339 [2024-07-15 20:52:40.677246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.339 qpair failed and we were unable to recover it. 00:27:06.339 [2024-07-15 20:52:40.677472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.339 [2024-07-15 20:52:40.677503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.339 qpair failed and we were unable to recover it. 00:27:06.339 [2024-07-15 20:52:40.677730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.339 [2024-07-15 20:52:40.677760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.339 qpair failed and we were unable to recover it. 00:27:06.339 [2024-07-15 20:52:40.678119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.339 [2024-07-15 20:52:40.678189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.339 qpair failed and we were unable to recover it. 00:27:06.339 [2024-07-15 20:52:40.678515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.339 [2024-07-15 20:52:40.678585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.339 qpair failed and we were unable to recover it. 00:27:06.339 [2024-07-15 20:52:40.678831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.339 [2024-07-15 20:52:40.678844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.339 qpair failed and we were unable to recover it. 00:27:06.339 [2024-07-15 20:52:40.679023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.339 [2024-07-15 20:52:40.679035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.339 qpair failed and we were unable to recover it. 
00:27:06.339 [2024-07-15 20:52:40.679341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.339 [2024-07-15 20:52:40.679374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.339 qpair failed and we were unable to recover it. 00:27:06.339 [2024-07-15 20:52:40.679656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.339 [2024-07-15 20:52:40.679688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.339 qpair failed and we were unable to recover it. 00:27:06.339 [2024-07-15 20:52:40.679911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.339 [2024-07-15 20:52:40.679923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.339 qpair failed and we were unable to recover it. 00:27:06.339 [2024-07-15 20:52:40.680160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.339 [2024-07-15 20:52:40.680191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.339 qpair failed and we were unable to recover it. 00:27:06.339 [2024-07-15 20:52:40.680374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.339 [2024-07-15 20:52:40.680405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.339 qpair failed and we were unable to recover it. 00:27:06.339 [2024-07-15 20:52:40.680623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.339 [2024-07-15 20:52:40.680654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.339 qpair failed and we were unable to recover it. 00:27:06.339 [2024-07-15 20:52:40.680868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.339 [2024-07-15 20:52:40.680880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.339 qpair failed and we were unable to recover it. 00:27:06.339 [2024-07-15 20:52:40.681151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.339 [2024-07-15 20:52:40.681182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.339 qpair failed and we were unable to recover it. 00:27:06.339 [2024-07-15 20:52:40.681505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.339 [2024-07-15 20:52:40.681538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.339 qpair failed and we were unable to recover it. 00:27:06.339 [2024-07-15 20:52:40.681828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.339 [2024-07-15 20:52:40.681867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.339 qpair failed and we were unable to recover it. 
00:27:06.339 [2024-07-15 20:52:40.682106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.339 [2024-07-15 20:52:40.682137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.339 qpair failed and we were unable to recover it. 00:27:06.339 [2024-07-15 20:52:40.682318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.339 [2024-07-15 20:52:40.682350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.339 qpair failed and we were unable to recover it. 00:27:06.339 [2024-07-15 20:52:40.682531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.339 [2024-07-15 20:52:40.682542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.339 qpair failed and we were unable to recover it. 00:27:06.339 [2024-07-15 20:52:40.682818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.339 [2024-07-15 20:52:40.682849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.339 qpair failed and we were unable to recover it. 00:27:06.339 [2024-07-15 20:52:40.683128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.339 [2024-07-15 20:52:40.683159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.339 qpair failed and we were unable to recover it. 00:27:06.339 [2024-07-15 20:52:40.683476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.339 [2024-07-15 20:52:40.683509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.339 qpair failed and we were unable to recover it. 00:27:06.339 [2024-07-15 20:52:40.683801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.339 [2024-07-15 20:52:40.683832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.339 qpair failed and we were unable to recover it. 00:27:06.339 [2024-07-15 20:52:40.684061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.339 [2024-07-15 20:52:40.684092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.339 qpair failed and we were unable to recover it. 00:27:06.339 [2024-07-15 20:52:40.684317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.339 [2024-07-15 20:52:40.684357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.339 qpair failed and we were unable to recover it. 00:27:06.339 [2024-07-15 20:52:40.684602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.339 [2024-07-15 20:52:40.684639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.339 qpair failed and we were unable to recover it. 
00:27:06.339 [2024-07-15 20:52:40.684859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.339 [2024-07-15 20:52:40.684890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.339 qpair failed and we were unable to recover it. 00:27:06.339 [2024-07-15 20:52:40.685057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.339 [2024-07-15 20:52:40.685087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.339 qpair failed and we were unable to recover it. 00:27:06.339 [2024-07-15 20:52:40.685333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.339 [2024-07-15 20:52:40.685364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.339 qpair failed and we were unable to recover it. 00:27:06.339 [2024-07-15 20:52:40.685663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.339 [2024-07-15 20:52:40.685695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.339 qpair failed and we were unable to recover it. 00:27:06.339 [2024-07-15 20:52:40.685948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.339 [2024-07-15 20:52:40.685960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.339 qpair failed and we were unable to recover it. 00:27:06.339 [2024-07-15 20:52:40.686202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.339 [2024-07-15 20:52:40.686215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.339 qpair failed and we were unable to recover it. 00:27:06.339 [2024-07-15 20:52:40.686407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.339 [2024-07-15 20:52:40.686438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.339 qpair failed and we were unable to recover it. 00:27:06.339 [2024-07-15 20:52:40.686671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.339 [2024-07-15 20:52:40.686702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.339 qpair failed and we were unable to recover it. 00:27:06.339 [2024-07-15 20:52:40.687001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.339 [2024-07-15 20:52:40.687032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.339 qpair failed and we were unable to recover it. 00:27:06.339 [2024-07-15 20:52:40.687260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.340 [2024-07-15 20:52:40.687292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.340 qpair failed and we were unable to recover it. 
00:27:06.340 [2024-07-15 20:52:40.687573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.340 [2024-07-15 20:52:40.687604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.340 qpair failed and we were unable to recover it. 00:27:06.340 [2024-07-15 20:52:40.687781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.340 [2024-07-15 20:52:40.687812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.340 qpair failed and we were unable to recover it. 00:27:06.340 [2024-07-15 20:52:40.687977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.340 [2024-07-15 20:52:40.688008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.340 qpair failed and we were unable to recover it. 00:27:06.340 [2024-07-15 20:52:40.688269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.340 [2024-07-15 20:52:40.688301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.340 qpair failed and we were unable to recover it. 00:27:06.340 [2024-07-15 20:52:40.688477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.340 [2024-07-15 20:52:40.688515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.340 qpair failed and we were unable to recover it. 00:27:06.340 [2024-07-15 20:52:40.688632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.340 [2024-07-15 20:52:40.688643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.340 qpair failed and we were unable to recover it. 00:27:06.340 [2024-07-15 20:52:40.688850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.340 [2024-07-15 20:52:40.688882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.340 qpair failed and we were unable to recover it. 00:27:06.340 [2024-07-15 20:52:40.689166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.340 [2024-07-15 20:52:40.689197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.340 qpair failed and we were unable to recover it. 00:27:06.340 [2024-07-15 20:52:40.689532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.340 [2024-07-15 20:52:40.689601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.340 qpair failed and we were unable to recover it. 00:27:06.340 [2024-07-15 20:52:40.689955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.340 [2024-07-15 20:52:40.689996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.340 qpair failed and we were unable to recover it. 
00:27:06.340 [2024-07-15 20:52:40.690245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.340 [2024-07-15 20:52:40.690279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.340 qpair failed and we were unable to recover it. 00:27:06.340 [2024-07-15 20:52:40.690526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.340 [2024-07-15 20:52:40.690558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.340 qpair failed and we were unable to recover it. 00:27:06.340 [2024-07-15 20:52:40.690844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.340 [2024-07-15 20:52:40.690875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.340 qpair failed and we were unable to recover it. 00:27:06.340 [2024-07-15 20:52:40.691194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.340 [2024-07-15 20:52:40.691234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.340 qpair failed and we were unable to recover it. 00:27:06.340 [2024-07-15 20:52:40.691510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.340 [2024-07-15 20:52:40.691542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.340 qpair failed and we were unable to recover it. 00:27:06.340 [2024-07-15 20:52:40.691766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.340 [2024-07-15 20:52:40.691782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.340 qpair failed and we were unable to recover it. 00:27:06.340 [2024-07-15 20:52:40.691983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.340 [2024-07-15 20:52:40.691998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.340 qpair failed and we were unable to recover it. 00:27:06.340 [2024-07-15 20:52:40.692232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.340 [2024-07-15 20:52:40.692248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.340 qpair failed and we were unable to recover it. 00:27:06.340 [2024-07-15 20:52:40.692513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.340 [2024-07-15 20:52:40.692528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.340 qpair failed and we were unable to recover it. 00:27:06.340 [2024-07-15 20:52:40.692657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.340 [2024-07-15 20:52:40.692676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.340 qpair failed and we were unable to recover it. 
00:27:06.340 [2024-07-15 20:52:40.692904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.340 [2024-07-15 20:52:40.692936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.340 qpair failed and we were unable to recover it. 00:27:06.340 [2024-07-15 20:52:40.693218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.340 [2024-07-15 20:52:40.693260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.340 qpair failed and we were unable to recover it. 00:27:06.340 [2024-07-15 20:52:40.693492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.340 [2024-07-15 20:52:40.693508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.340 qpair failed and we were unable to recover it. 00:27:06.340 [2024-07-15 20:52:40.693715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.340 [2024-07-15 20:52:40.693747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.340 qpair failed and we were unable to recover it. 00:27:06.340 [2024-07-15 20:52:40.693981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.340 [2024-07-15 20:52:40.694012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.340 qpair failed and we were unable to recover it. 00:27:06.340 [2024-07-15 20:52:40.694191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.340 [2024-07-15 20:52:40.694223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.340 qpair failed and we were unable to recover it. 00:27:06.340 [2024-07-15 20:52:40.694493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.340 [2024-07-15 20:52:40.694525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.340 qpair failed and we were unable to recover it. 00:27:06.340 [2024-07-15 20:52:40.694757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.340 [2024-07-15 20:52:40.694790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.340 qpair failed and we were unable to recover it. 00:27:06.340 [2024-07-15 20:52:40.695093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.340 [2024-07-15 20:52:40.695125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.340 qpair failed and we were unable to recover it. 00:27:06.340 [2024-07-15 20:52:40.695307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.340 [2024-07-15 20:52:40.695339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.340 qpair failed and we were unable to recover it. 
00:27:06.340 [2024-07-15 20:52:40.695571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.340 [2024-07-15 20:52:40.695601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.340 qpair failed and we were unable to recover it. 00:27:06.340 [2024-07-15 20:52:40.695841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.340 [2024-07-15 20:52:40.695873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.340 qpair failed and we were unable to recover it. 00:27:06.340 [2024-07-15 20:52:40.697399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.340 [2024-07-15 20:52:40.697427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.341 qpair failed and we were unable to recover it. 00:27:06.341 [2024-07-15 20:52:40.697703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.341 [2024-07-15 20:52:40.697737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.341 qpair failed and we were unable to recover it. 00:27:06.341 [2024-07-15 20:52:40.697911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.341 [2024-07-15 20:52:40.697942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.341 qpair failed and we were unable to recover it. 00:27:06.341 [2024-07-15 20:52:40.699119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.341 [2024-07-15 20:52:40.699145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.341 qpair failed and we were unable to recover it. 00:27:06.341 [2024-07-15 20:52:40.699458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.341 [2024-07-15 20:52:40.699492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.341 qpair failed and we were unable to recover it. 00:27:06.341 [2024-07-15 20:52:40.699802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.341 [2024-07-15 20:52:40.699834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.341 qpair failed and we were unable to recover it. 00:27:06.341 [2024-07-15 20:52:40.700060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.341 [2024-07-15 20:52:40.700092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.341 qpair failed and we were unable to recover it. 00:27:06.341 [2024-07-15 20:52:40.700278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.341 [2024-07-15 20:52:40.700310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.341 qpair failed and we were unable to recover it. 
00:27:06.341 [2024-07-15 20:52:40.700591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.341 [2024-07-15 20:52:40.700623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.341 qpair failed and we were unable to recover it. 00:27:06.341 [2024-07-15 20:52:40.700876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.341 [2024-07-15 20:52:40.700906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.341 qpair failed and we were unable to recover it. 00:27:06.341 [2024-07-15 20:52:40.701207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.341 [2024-07-15 20:52:40.701247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.341 qpair failed and we were unable to recover it. 00:27:06.341 [2024-07-15 20:52:40.701502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.341 [2024-07-15 20:52:40.701534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.341 qpair failed and we were unable to recover it. 00:27:06.341 [2024-07-15 20:52:40.701776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.341 [2024-07-15 20:52:40.701807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.341 qpair failed and we were unable to recover it. 00:27:06.341 [2024-07-15 20:52:40.702041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.341 [2024-07-15 20:52:40.702072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.341 qpair failed and we were unable to recover it. 00:27:06.341 [2024-07-15 20:52:40.702421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.341 [2024-07-15 20:52:40.702492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.341 qpair failed and we were unable to recover it. 00:27:06.341 [2024-07-15 20:52:40.702779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.341 [2024-07-15 20:52:40.702808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.341 qpair failed and we were unable to recover it. 00:27:06.341 [2024-07-15 20:52:40.703108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.341 [2024-07-15 20:52:40.703143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.341 qpair failed and we were unable to recover it. 00:27:06.341 [2024-07-15 20:52:40.703390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.341 [2024-07-15 20:52:40.703423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.341 qpair failed and we were unable to recover it. 
00:27:06.341 [2024-07-15 20:52:40.703658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.341 [2024-07-15 20:52:40.703690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.341 qpair failed and we were unable to recover it. 00:27:06.341 [2024-07-15 20:52:40.703954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.341 [2024-07-15 20:52:40.703985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.341 qpair failed and we were unable to recover it. 00:27:06.341 [2024-07-15 20:52:40.704163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.341 [2024-07-15 20:52:40.704194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.341 qpair failed and we were unable to recover it. 00:27:06.341 [2024-07-15 20:52:40.704519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.341 [2024-07-15 20:52:40.704551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.341 qpair failed and we were unable to recover it. 00:27:06.341 [2024-07-15 20:52:40.704722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.341 [2024-07-15 20:52:40.704753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.341 qpair failed and we were unable to recover it. 00:27:06.341 [2024-07-15 20:52:40.705064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.341 [2024-07-15 20:52:40.705096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.341 qpair failed and we were unable to recover it. 00:27:06.341 [2024-07-15 20:52:40.705352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.341 [2024-07-15 20:52:40.705386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.341 qpair failed and we were unable to recover it. 00:27:06.341 [2024-07-15 20:52:40.705615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.341 [2024-07-15 20:52:40.705627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.341 qpair failed and we were unable to recover it. 00:27:06.341 [2024-07-15 20:52:40.705804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.341 [2024-07-15 20:52:40.705835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.341 qpair failed and we were unable to recover it. 00:27:06.341 [2024-07-15 20:52:40.706009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.341 [2024-07-15 20:52:40.706048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.341 qpair failed and we were unable to recover it. 
00:27:06.341 [2024-07-15 20:52:40.706275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.341 [2024-07-15 20:52:40.706307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.341 qpair failed and we were unable to recover it. 00:27:06.341 [2024-07-15 20:52:40.706580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.341 [2024-07-15 20:52:40.706592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.341 qpair failed and we were unable to recover it. 00:27:06.341 [2024-07-15 20:52:40.706780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.341 [2024-07-15 20:52:40.706792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.341 qpair failed and we were unable to recover it. 00:27:06.341 [2024-07-15 20:52:40.706997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.341 [2024-07-15 20:52:40.707009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.341 qpair failed and we were unable to recover it. 00:27:06.341 [2024-07-15 20:52:40.707239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.341 [2024-07-15 20:52:40.707271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.341 qpair failed and we were unable to recover it. 00:27:06.341 [2024-07-15 20:52:40.707417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.341 [2024-07-15 20:52:40.707448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.341 qpair failed and we were unable to recover it. 00:27:06.341 [2024-07-15 20:52:40.707678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.341 [2024-07-15 20:52:40.707709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.341 qpair failed and we were unable to recover it. 00:27:06.341 [2024-07-15 20:52:40.708037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.342 [2024-07-15 20:52:40.708067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.342 qpair failed and we were unable to recover it. 00:27:06.342 [2024-07-15 20:52:40.708302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.342 [2024-07-15 20:52:40.708335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.342 qpair failed and we were unable to recover it. 00:27:06.342 [2024-07-15 20:52:40.708504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.342 [2024-07-15 20:52:40.708536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.342 qpair failed and we were unable to recover it. 
00:27:06.342 [2024-07-15 20:52:40.708774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.342 [2024-07-15 20:52:40.708805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.342 qpair failed and we were unable to recover it. 00:27:06.342 [2024-07-15 20:52:40.709096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.342 [2024-07-15 20:52:40.709127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.342 qpair failed and we were unable to recover it. 00:27:06.342 [2024-07-15 20:52:40.709306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.342 [2024-07-15 20:52:40.709338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.342 qpair failed and we were unable to recover it. 00:27:06.342 [2024-07-15 20:52:40.709549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.342 [2024-07-15 20:52:40.709581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.342 qpair failed and we were unable to recover it. 00:27:06.342 [2024-07-15 20:52:40.709860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.342 [2024-07-15 20:52:40.709891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.342 qpair failed and we were unable to recover it. 00:27:06.342 [2024-07-15 20:52:40.710203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.342 [2024-07-15 20:52:40.710243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.342 qpair failed and we were unable to recover it. 00:27:06.342 [2024-07-15 20:52:40.710475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.342 [2024-07-15 20:52:40.710507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.342 qpair failed and we were unable to recover it. 00:27:06.342 [2024-07-15 20:52:40.710825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.342 [2024-07-15 20:52:40.710855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.342 qpair failed and we were unable to recover it. 00:27:06.342 [2024-07-15 20:52:40.711051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.342 [2024-07-15 20:52:40.711082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.342 qpair failed and we were unable to recover it. 00:27:06.342 [2024-07-15 20:52:40.711368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.342 [2024-07-15 20:52:40.711400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.342 qpair failed and we were unable to recover it. 
00:27:06.342 [2024-07-15 20:52:40.711637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.342 [2024-07-15 20:52:40.711667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.342 qpair failed and we were unable to recover it. 00:27:06.342 [2024-07-15 20:52:40.711918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.342 [2024-07-15 20:52:40.711949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.342 qpair failed and we were unable to recover it. 00:27:06.342 [2024-07-15 20:52:40.712178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.342 [2024-07-15 20:52:40.712209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.342 qpair failed and we were unable to recover it. 00:27:06.342 [2024-07-15 20:52:40.712573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.342 [2024-07-15 20:52:40.712605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.342 qpair failed and we were unable to recover it. 00:27:06.342 [2024-07-15 20:52:40.712899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.342 [2024-07-15 20:52:40.712930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.342 qpair failed and we were unable to recover it. 00:27:06.342 [2024-07-15 20:52:40.713208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.342 [2024-07-15 20:52:40.713251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.342 qpair failed and we were unable to recover it. 00:27:06.342 [2024-07-15 20:52:40.713567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.342 [2024-07-15 20:52:40.713608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.342 qpair failed and we were unable to recover it. 00:27:06.342 [2024-07-15 20:52:40.713847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.342 [2024-07-15 20:52:40.713879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.342 qpair failed and we were unable to recover it. 00:27:06.342 [2024-07-15 20:52:40.714102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.342 [2024-07-15 20:52:40.714133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.342 qpair failed and we were unable to recover it. 00:27:06.342 [2024-07-15 20:52:40.714390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.342 [2024-07-15 20:52:40.714423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.342 qpair failed and we were unable to recover it. 
00:27:06.342 [2024-07-15 20:52:40.714662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.342 [2024-07-15 20:52:40.714694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.342 qpair failed and we were unable to recover it. 00:27:06.342 [2024-07-15 20:52:40.714867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.342 [2024-07-15 20:52:40.714899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.342 qpair failed and we were unable to recover it. 00:27:06.342 [2024-07-15 20:52:40.715202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.342 [2024-07-15 20:52:40.715241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.342 qpair failed and we were unable to recover it. 00:27:06.342 [2024-07-15 20:52:40.715495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.342 [2024-07-15 20:52:40.715527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.342 qpair failed and we were unable to recover it. 00:27:06.342 [2024-07-15 20:52:40.715863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.342 [2024-07-15 20:52:40.715894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.342 qpair failed and we were unable to recover it. 00:27:06.342 [2024-07-15 20:52:40.716125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.343 [2024-07-15 20:52:40.716155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.343 qpair failed and we were unable to recover it. 00:27:06.343 [2024-07-15 20:52:40.716431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.343 [2024-07-15 20:52:40.716464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.343 qpair failed and we were unable to recover it. 00:27:06.343 [2024-07-15 20:52:40.716694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.343 [2024-07-15 20:52:40.716725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.343 qpair failed and we were unable to recover it. 00:27:06.343 [2024-07-15 20:52:40.716970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.343 [2024-07-15 20:52:40.717002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.343 qpair failed and we were unable to recover it. 00:27:06.343 [2024-07-15 20:52:40.717354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.343 [2024-07-15 20:52:40.717385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.343 qpair failed and we were unable to recover it. 
00:27:06.348 [2024-07-15 20:52:40.764677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.348 [2024-07-15 20:52:40.764707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.348 qpair failed and we were unable to recover it.
00:27:06.348 [2024-07-15 20:52:40.764966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.348 [2024-07-15 20:52:40.764997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.348 qpair failed and we were unable to recover it.
00:27:06.348 [2024-07-15 20:52:40.765307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.348 [2024-07-15 20:52:40.765339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.348 qpair failed and we were unable to recover it.
00:27:06.348 [2024-07-15 20:52:40.765509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.348 [2024-07-15 20:52:40.765540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.348 qpair failed and we were unable to recover it.
00:27:06.348 [2024-07-15 20:52:40.765778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.348 [2024-07-15 20:52:40.765810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.348 qpair failed and we were unable to recover it.
00:27:06.348 [2024-07-15 20:52:40.766046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.348 [2024-07-15 20:52:40.766077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.348 qpair failed and we were unable to recover it.
00:27:06.348 [2024-07-15 20:52:40.766481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.348 [2024-07-15 20:52:40.766550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:06.348 qpair failed and we were unable to recover it.
00:27:06.348 [2024-07-15 20:52:40.766762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.348 [2024-07-15 20:52:40.766797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:06.348 qpair failed and we were unable to recover it.
00:27:06.348 [2024-07-15 20:52:40.766976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.348 [2024-07-15 20:52:40.767008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:06.348 qpair failed and we were unable to recover it.
00:27:06.348 [2024-07-15 20:52:40.767325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.348 [2024-07-15 20:52:40.767358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:06.348 qpair failed and we were unable to recover it.
00:27:06.348 [2024-07-15 20:52:40.767548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.348 [2024-07-15 20:52:40.767580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.348 qpair failed and we were unable to recover it. 00:27:06.348 [2024-07-15 20:52:40.767804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.348 [2024-07-15 20:52:40.767835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.348 qpair failed and we were unable to recover it. 00:27:06.348 [2024-07-15 20:52:40.768059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.348 [2024-07-15 20:52:40.768090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.348 qpair failed and we were unable to recover it. 00:27:06.348 [2024-07-15 20:52:40.768272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.348 [2024-07-15 20:52:40.768318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.348 qpair failed and we were unable to recover it. 00:27:06.348 [2024-07-15 20:52:40.768488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.348 [2024-07-15 20:52:40.768519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.348 qpair failed and we were unable to recover it. 00:27:06.348 [2024-07-15 20:52:40.768723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.348 [2024-07-15 20:52:40.768754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.348 qpair failed and we were unable to recover it. 00:27:06.348 [2024-07-15 20:52:40.769044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.348 [2024-07-15 20:52:40.769060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.348 qpair failed and we were unable to recover it. 00:27:06.348 [2024-07-15 20:52:40.769340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.348 [2024-07-15 20:52:40.769373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.348 qpair failed and we were unable to recover it. 00:27:06.348 [2024-07-15 20:52:40.769654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.348 [2024-07-15 20:52:40.769686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.348 qpair failed and we were unable to recover it. 00:27:06.348 [2024-07-15 20:52:40.769876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.348 [2024-07-15 20:52:40.769896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.348 qpair failed and we were unable to recover it. 
00:27:06.348 [2024-07-15 20:52:40.770164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.348 [2024-07-15 20:52:40.770180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.348 qpair failed and we were unable to recover it. 00:27:06.348 [2024-07-15 20:52:40.770391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.348 [2024-07-15 20:52:40.770423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.348 qpair failed and we were unable to recover it. 00:27:06.348 [2024-07-15 20:52:40.770639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.348 [2024-07-15 20:52:40.770670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.348 qpair failed and we were unable to recover it. 00:27:06.349 [2024-07-15 20:52:40.770907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.349 [2024-07-15 20:52:40.770938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.349 qpair failed and we were unable to recover it. 00:27:06.349 [2024-07-15 20:52:40.771245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.349 [2024-07-15 20:52:40.771277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.349 qpair failed and we were unable to recover it. 00:27:06.349 [2024-07-15 20:52:40.771524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.349 [2024-07-15 20:52:40.771555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.349 qpair failed and we were unable to recover it. 00:27:06.349 [2024-07-15 20:52:40.771793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.349 [2024-07-15 20:52:40.771824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.349 qpair failed and we were unable to recover it. 00:27:06.349 [2024-07-15 20:52:40.771987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.349 [2024-07-15 20:52:40.772018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.349 qpair failed and we were unable to recover it. 00:27:06.349 [2024-07-15 20:52:40.772290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.349 [2024-07-15 20:52:40.772322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.349 qpair failed and we were unable to recover it. 00:27:06.349 [2024-07-15 20:52:40.772508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.349 [2024-07-15 20:52:40.772540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.349 qpair failed and we were unable to recover it. 
00:27:06.349 [2024-07-15 20:52:40.772765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.349 [2024-07-15 20:52:40.772796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.349 qpair failed and we were unable to recover it. 00:27:06.349 [2024-07-15 20:52:40.773020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.349 [2024-07-15 20:52:40.773051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.349 qpair failed and we were unable to recover it. 00:27:06.349 [2024-07-15 20:52:40.773291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.349 [2024-07-15 20:52:40.773323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.349 qpair failed and we were unable to recover it. 00:27:06.349 [2024-07-15 20:52:40.773592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.349 [2024-07-15 20:52:40.773624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.349 qpair failed and we were unable to recover it. 00:27:06.349 [2024-07-15 20:52:40.773857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.349 [2024-07-15 20:52:40.773891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.349 qpair failed and we were unable to recover it. 00:27:06.349 [2024-07-15 20:52:40.774159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.349 [2024-07-15 20:52:40.774175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.349 qpair failed and we were unable to recover it. 00:27:06.349 [2024-07-15 20:52:40.774369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.349 [2024-07-15 20:52:40.774384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.349 qpair failed and we were unable to recover it. 00:27:06.349 [2024-07-15 20:52:40.774619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.349 [2024-07-15 20:52:40.774634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.349 qpair failed and we were unable to recover it. 00:27:06.349 [2024-07-15 20:52:40.774910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.349 [2024-07-15 20:52:40.774926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.349 qpair failed and we were unable to recover it. 00:27:06.349 [2024-07-15 20:52:40.775125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.349 [2024-07-15 20:52:40.775170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.349 qpair failed and we were unable to recover it. 
00:27:06.349 [2024-07-15 20:52:40.775407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.349 [2024-07-15 20:52:40.775439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.349 qpair failed and we were unable to recover it. 00:27:06.349 [2024-07-15 20:52:40.775737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.349 [2024-07-15 20:52:40.775769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.349 qpair failed and we were unable to recover it. 00:27:06.349 [2024-07-15 20:52:40.776083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.349 [2024-07-15 20:52:40.776099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.349 qpair failed and we were unable to recover it. 00:27:06.349 [2024-07-15 20:52:40.776397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.349 [2024-07-15 20:52:40.776413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.349 qpair failed and we were unable to recover it. 00:27:06.349 [2024-07-15 20:52:40.776547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.349 [2024-07-15 20:52:40.776562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.349 qpair failed and we were unable to recover it. 00:27:06.349 [2024-07-15 20:52:40.776748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.349 [2024-07-15 20:52:40.776764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.349 qpair failed and we were unable to recover it. 00:27:06.349 [2024-07-15 20:52:40.776978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.349 [2024-07-15 20:52:40.777010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.349 qpair failed and we were unable to recover it. 00:27:06.349 [2024-07-15 20:52:40.777174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.349 [2024-07-15 20:52:40.777205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.349 qpair failed and we were unable to recover it. 00:27:06.349 [2024-07-15 20:52:40.777533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.349 [2024-07-15 20:52:40.777565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.349 qpair failed and we were unable to recover it. 00:27:06.349 [2024-07-15 20:52:40.777820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.349 [2024-07-15 20:52:40.777851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.349 qpair failed and we were unable to recover it. 
00:27:06.349 [2024-07-15 20:52:40.778077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.349 [2024-07-15 20:52:40.778107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.349 qpair failed and we were unable to recover it. 00:27:06.349 [2024-07-15 20:52:40.778329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.349 [2024-07-15 20:52:40.778361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.349 qpair failed and we were unable to recover it. 00:27:06.349 [2024-07-15 20:52:40.778595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.349 [2024-07-15 20:52:40.778627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.349 qpair failed and we were unable to recover it. 00:27:06.349 [2024-07-15 20:52:40.778799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.349 [2024-07-15 20:52:40.778830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.349 qpair failed and we were unable to recover it. 00:27:06.349 [2024-07-15 20:52:40.779112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.349 [2024-07-15 20:52:40.779144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.349 qpair failed and we were unable to recover it. 00:27:06.349 [2024-07-15 20:52:40.779389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.349 [2024-07-15 20:52:40.779421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.349 qpair failed and we were unable to recover it. 00:27:06.350 [2024-07-15 20:52:40.779659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.350 [2024-07-15 20:52:40.779691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.350 qpair failed and we were unable to recover it. 00:27:06.350 [2024-07-15 20:52:40.780017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.350 [2024-07-15 20:52:40.780048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.350 qpair failed and we were unable to recover it. 00:27:06.350 [2024-07-15 20:52:40.780328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.350 [2024-07-15 20:52:40.780362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.350 qpair failed and we were unable to recover it. 00:27:06.350 [2024-07-15 20:52:40.780603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.350 [2024-07-15 20:52:40.780639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.350 qpair failed and we were unable to recover it. 
00:27:06.350 [2024-07-15 20:52:40.780885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.350 [2024-07-15 20:52:40.780915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.350 qpair failed and we were unable to recover it. 00:27:06.350 [2024-07-15 20:52:40.781211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.350 [2024-07-15 20:52:40.781251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.350 qpair failed and we were unable to recover it. 00:27:06.350 [2024-07-15 20:52:40.781502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.350 [2024-07-15 20:52:40.781534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.350 qpair failed and we were unable to recover it. 00:27:06.350 [2024-07-15 20:52:40.781816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.626 [2024-07-15 20:52:40.781847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.626 qpair failed and we were unable to recover it. 00:27:06.626 [2024-07-15 20:52:40.782154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.626 [2024-07-15 20:52:40.782188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.626 qpair failed and we were unable to recover it. 00:27:06.626 [2024-07-15 20:52:40.782460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.626 [2024-07-15 20:52:40.782491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.626 qpair failed and we were unable to recover it. 00:27:06.626 [2024-07-15 20:52:40.782759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.626 [2024-07-15 20:52:40.782790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.626 qpair failed and we were unable to recover it. 00:27:06.626 [2024-07-15 20:52:40.783022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.626 [2024-07-15 20:52:40.783038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.626 qpair failed and we were unable to recover it. 00:27:06.626 [2024-07-15 20:52:40.783267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.626 [2024-07-15 20:52:40.783284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.626 qpair failed and we were unable to recover it. 00:27:06.626 [2024-07-15 20:52:40.783441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.626 [2024-07-15 20:52:40.783456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.626 qpair failed and we were unable to recover it. 
00:27:06.626 [2024-07-15 20:52:40.783642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.626 [2024-07-15 20:52:40.783658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.626 qpair failed and we were unable to recover it. 00:27:06.626 [2024-07-15 20:52:40.783841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.626 [2024-07-15 20:52:40.783857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.626 qpair failed and we were unable to recover it. 00:27:06.626 [2024-07-15 20:52:40.783995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.626 [2024-07-15 20:52:40.784012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.626 qpair failed and we were unable to recover it. 00:27:06.626 [2024-07-15 20:52:40.784238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.626 [2024-07-15 20:52:40.784255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.626 qpair failed and we were unable to recover it. 00:27:06.626 [2024-07-15 20:52:40.784438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.626 [2024-07-15 20:52:40.784454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.626 qpair failed and we were unable to recover it. 00:27:06.626 [2024-07-15 20:52:40.784584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.626 [2024-07-15 20:52:40.784615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.626 qpair failed and we were unable to recover it. 00:27:06.626 [2024-07-15 20:52:40.784853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.626 [2024-07-15 20:52:40.784884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.626 qpair failed and we were unable to recover it. 00:27:06.626 [2024-07-15 20:52:40.785195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.626 [2024-07-15 20:52:40.785249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.626 qpair failed and we were unable to recover it. 00:27:06.626 [2024-07-15 20:52:40.785535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.626 [2024-07-15 20:52:40.785567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.626 qpair failed and we were unable to recover it. 00:27:06.626 [2024-07-15 20:52:40.785736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.626 [2024-07-15 20:52:40.785767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.626 qpair failed and we were unable to recover it. 
00:27:06.626 [2024-07-15 20:52:40.785999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.626 [2024-07-15 20:52:40.786030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.626 qpair failed and we were unable to recover it. 00:27:06.626 [2024-07-15 20:52:40.786297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.626 [2024-07-15 20:52:40.786329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.626 qpair failed and we were unable to recover it. 00:27:06.626 [2024-07-15 20:52:40.786581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.626 [2024-07-15 20:52:40.786612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.626 qpair failed and we were unable to recover it. 00:27:06.626 [2024-07-15 20:52:40.786832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.626 [2024-07-15 20:52:40.786848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.626 qpair failed and we were unable to recover it. 00:27:06.626 [2024-07-15 20:52:40.787080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.626 [2024-07-15 20:52:40.787096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.626 qpair failed and we were unable to recover it. 00:27:06.626 [2024-07-15 20:52:40.787321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.626 [2024-07-15 20:52:40.787337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.626 qpair failed and we were unable to recover it. 00:27:06.626 [2024-07-15 20:52:40.787493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.626 [2024-07-15 20:52:40.787509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.626 qpair failed and we were unable to recover it. 00:27:06.626 [2024-07-15 20:52:40.787693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.626 [2024-07-15 20:52:40.787709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.626 qpair failed and we were unable to recover it. 00:27:06.626 [2024-07-15 20:52:40.787932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.626 [2024-07-15 20:52:40.787962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.626 qpair failed and we were unable to recover it. 00:27:06.626 [2024-07-15 20:52:40.788199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.626 [2024-07-15 20:52:40.788240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.626 qpair failed and we were unable to recover it. 
00:27:06.626 [2024-07-15 20:52:40.788485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.626 [2024-07-15 20:52:40.788515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.626 qpair failed and we were unable to recover it. 00:27:06.626 [2024-07-15 20:52:40.788752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.626 [2024-07-15 20:52:40.788784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.626 qpair failed and we were unable to recover it. 00:27:06.626 [2024-07-15 20:52:40.789062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.626 [2024-07-15 20:52:40.789078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.626 qpair failed and we were unable to recover it. 00:27:06.626 [2024-07-15 20:52:40.789353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.626 [2024-07-15 20:52:40.789369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.626 qpair failed and we were unable to recover it. 00:27:06.626 [2024-07-15 20:52:40.789569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.626 [2024-07-15 20:52:40.789585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.626 qpair failed and we were unable to recover it. 00:27:06.626 [2024-07-15 20:52:40.789783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.626 [2024-07-15 20:52:40.789798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.626 qpair failed and we were unable to recover it. 00:27:06.626 [2024-07-15 20:52:40.789923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.626 [2024-07-15 20:52:40.789953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.626 qpair failed and we were unable to recover it. 00:27:06.626 [2024-07-15 20:52:40.790203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.626 [2024-07-15 20:52:40.790241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.626 qpair failed and we were unable to recover it. 00:27:06.626 [2024-07-15 20:52:40.790440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.626 [2024-07-15 20:52:40.790471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 00:27:06.627 [2024-07-15 20:52:40.790638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.790689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 
00:27:06.627 [2024-07-15 20:52:40.791006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.791038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 00:27:06.627 [2024-07-15 20:52:40.791369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.791401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 00:27:06.627 [2024-07-15 20:52:40.791648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.791679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 00:27:06.627 [2024-07-15 20:52:40.791913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.791953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 00:27:06.627 [2024-07-15 20:52:40.792149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.792165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 00:27:06.627 [2024-07-15 20:52:40.792421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.792452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 00:27:06.627 [2024-07-15 20:52:40.792715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.792746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 00:27:06.627 [2024-07-15 20:52:40.793007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.793022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 00:27:06.627 [2024-07-15 20:52:40.793252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.793268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 00:27:06.627 [2024-07-15 20:52:40.793409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.793425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 
00:27:06.627 [2024-07-15 20:52:40.793583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.793615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 00:27:06.627 [2024-07-15 20:52:40.793829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.793861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 00:27:06.627 [2024-07-15 20:52:40.794144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.794175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 00:27:06.627 [2024-07-15 20:52:40.794474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.794506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 00:27:06.627 [2024-07-15 20:52:40.794682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.794713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 00:27:06.627 [2024-07-15 20:52:40.794940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.794956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 00:27:06.627 [2024-07-15 20:52:40.795166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.795182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 00:27:06.627 [2024-07-15 20:52:40.795466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.795482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 00:27:06.627 [2024-07-15 20:52:40.795663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.795679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 00:27:06.627 [2024-07-15 20:52:40.795983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.796013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 
00:27:06.627 [2024-07-15 20:52:40.796297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.796329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 00:27:06.627 [2024-07-15 20:52:40.796615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.796646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 00:27:06.627 [2024-07-15 20:52:40.796883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.796914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 00:27:06.627 [2024-07-15 20:52:40.797079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.797111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 00:27:06.627 [2024-07-15 20:52:40.797372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.797404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 00:27:06.627 [2024-07-15 20:52:40.797576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.797608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 00:27:06.627 [2024-07-15 20:52:40.797836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.797869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 00:27:06.627 [2024-07-15 20:52:40.798035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.798051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 00:27:06.627 [2024-07-15 20:52:40.798274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.798306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 00:27:06.627 [2024-07-15 20:52:40.798595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.798626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 
00:27:06.627 [2024-07-15 20:52:40.798853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.798884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 00:27:06.627 [2024-07-15 20:52:40.799187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.799219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 00:27:06.627 [2024-07-15 20:52:40.799540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.799571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 00:27:06.627 [2024-07-15 20:52:40.799825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.799856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 00:27:06.627 [2024-07-15 20:52:40.800068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.800084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 00:27:06.627 [2024-07-15 20:52:40.800281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.800314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 00:27:06.627 [2024-07-15 20:52:40.800544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.800574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 00:27:06.627 [2024-07-15 20:52:40.800811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.800842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 00:27:06.627 [2024-07-15 20:52:40.801024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.627 [2024-07-15 20:52:40.801041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.627 qpair failed and we were unable to recover it. 00:27:06.627 [2024-07-15 20:52:40.801243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.628 [2024-07-15 20:52:40.801280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.628 qpair failed and we were unable to recover it. 
00:27:06.628 [2024-07-15 20:52:40.801510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.628 [2024-07-15 20:52:40.801541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.628 qpair failed and we were unable to recover it. 00:27:06.628 [2024-07-15 20:52:40.801777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.628 [2024-07-15 20:52:40.801808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.628 qpair failed and we were unable to recover it. 00:27:06.628 [2024-07-15 20:52:40.802152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.628 [2024-07-15 20:52:40.802185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.628 qpair failed and we were unable to recover it. 00:27:06.628 [2024-07-15 20:52:40.802451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.628 [2024-07-15 20:52:40.802483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.628 qpair failed and we were unable to recover it. 00:27:06.628 [2024-07-15 20:52:40.802660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.628 [2024-07-15 20:52:40.802691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.628 qpair failed and we were unable to recover it. 00:27:06.628 [2024-07-15 20:52:40.802874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.628 [2024-07-15 20:52:40.802890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.628 qpair failed and we were unable to recover it. 00:27:06.628 [2024-07-15 20:52:40.803092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.628 [2024-07-15 20:52:40.803123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.628 qpair failed and we were unable to recover it. 00:27:06.628 [2024-07-15 20:52:40.803352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.628 [2024-07-15 20:52:40.803384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.628 qpair failed and we were unable to recover it. 00:27:06.628 [2024-07-15 20:52:40.803622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.628 [2024-07-15 20:52:40.803652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.628 qpair failed and we were unable to recover it. 00:27:06.628 [2024-07-15 20:52:40.803882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.628 [2024-07-15 20:52:40.803913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.628 qpair failed and we were unable to recover it. 
00:27:06.628 [2024-07-15 20:52:40.804130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.628 [2024-07-15 20:52:40.804146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.628 qpair failed and we were unable to recover it. 00:27:06.628 [2024-07-15 20:52:40.804400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.628 [2024-07-15 20:52:40.804417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.628 qpair failed and we were unable to recover it. 00:27:06.628 [2024-07-15 20:52:40.804599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.628 [2024-07-15 20:52:40.804615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.628 qpair failed and we were unable to recover it. 00:27:06.628 [2024-07-15 20:52:40.804816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.628 [2024-07-15 20:52:40.804832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.628 qpair failed and we were unable to recover it. 00:27:06.628 [2024-07-15 20:52:40.805055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.628 [2024-07-15 20:52:40.805071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.628 qpair failed and we were unable to recover it. 00:27:06.628 [2024-07-15 20:52:40.805351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.628 [2024-07-15 20:52:40.805368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.628 qpair failed and we were unable to recover it. 00:27:06.628 [2024-07-15 20:52:40.805524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.628 [2024-07-15 20:52:40.805539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.628 qpair failed and we were unable to recover it. 00:27:06.628 [2024-07-15 20:52:40.805828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.628 [2024-07-15 20:52:40.805859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.628 qpair failed and we were unable to recover it. 00:27:06.628 [2024-07-15 20:52:40.806168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.628 [2024-07-15 20:52:40.806200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.628 qpair failed and we were unable to recover it. 00:27:06.628 [2024-07-15 20:52:40.806503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.628 [2024-07-15 20:52:40.806536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.628 qpair failed and we were unable to recover it. 
00:27:06.628 [2024-07-15 20:52:40.806818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.628 [2024-07-15 20:52:40.806849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.628 qpair failed and we were unable to recover it. 00:27:06.628 [2024-07-15 20:52:40.807101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.628 [2024-07-15 20:52:40.807133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.628 qpair failed and we were unable to recover it. 00:27:06.628 [2024-07-15 20:52:40.807359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.628 [2024-07-15 20:52:40.807391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.628 qpair failed and we were unable to recover it. 00:27:06.628 [2024-07-15 20:52:40.807654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.628 [2024-07-15 20:52:40.807686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.628 qpair failed and we were unable to recover it. 00:27:06.628 [2024-07-15 20:52:40.807906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.628 [2024-07-15 20:52:40.807938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.628 qpair failed and we were unable to recover it. 00:27:06.628 [2024-07-15 20:52:40.808254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.628 [2024-07-15 20:52:40.808270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.628 qpair failed and we were unable to recover it. 00:27:06.628 [2024-07-15 20:52:40.808478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.628 [2024-07-15 20:52:40.808494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.628 qpair failed and we were unable to recover it. 00:27:06.628 [2024-07-15 20:52:40.808628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.628 [2024-07-15 20:52:40.808643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.628 qpair failed and we were unable to recover it. 00:27:06.628 [2024-07-15 20:52:40.808930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.628 [2024-07-15 20:52:40.808961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.628 qpair failed and we were unable to recover it. 00:27:06.628 [2024-07-15 20:52:40.809130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.628 [2024-07-15 20:52:40.809161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.628 qpair failed and we were unable to recover it. 
00:27:06.633 [2024-07-15 20:52:40.861481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.633 [2024-07-15 20:52:40.861514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.633 qpair failed and we were unable to recover it. 00:27:06.633 [2024-07-15 20:52:40.861827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.633 [2024-07-15 20:52:40.861859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.633 qpair failed and we were unable to recover it. 00:27:06.633 [2024-07-15 20:52:40.862093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.633 [2024-07-15 20:52:40.862124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.633 qpair failed and we were unable to recover it. 00:27:06.633 [2024-07-15 20:52:40.862404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.633 [2024-07-15 20:52:40.862436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.633 qpair failed and we were unable to recover it. 00:27:06.633 [2024-07-15 20:52:40.862657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.633 [2024-07-15 20:52:40.862689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.633 qpair failed and we were unable to recover it. 00:27:06.633 [2024-07-15 20:52:40.863018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.633 [2024-07-15 20:52:40.863059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.633 qpair failed and we were unable to recover it. 00:27:06.633 [2024-07-15 20:52:40.863256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.633 [2024-07-15 20:52:40.863276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.633 qpair failed and we were unable to recover it. 00:27:06.633 [2024-07-15 20:52:40.863407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.633 [2024-07-15 20:52:40.863422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.633 qpair failed and we were unable to recover it. 00:27:06.633 [2024-07-15 20:52:40.863570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.633 [2024-07-15 20:52:40.863587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.633 qpair failed and we were unable to recover it. 00:27:06.633 [2024-07-15 20:52:40.863770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.633 [2024-07-15 20:52:40.863786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.633 qpair failed and we were unable to recover it. 
00:27:06.633 [2024-07-15 20:52:40.863986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.633 [2024-07-15 20:52:40.864002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.633 qpair failed and we were unable to recover it. 00:27:06.634 [2024-07-15 20:52:40.864190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.634 [2024-07-15 20:52:40.864205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.634 qpair failed and we were unable to recover it. 00:27:06.634 [2024-07-15 20:52:40.864490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.634 [2024-07-15 20:52:40.864506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.634 qpair failed and we were unable to recover it. 00:27:06.634 [2024-07-15 20:52:40.864721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.634 [2024-07-15 20:52:40.864737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.634 qpair failed and we were unable to recover it. 00:27:06.634 [2024-07-15 20:52:40.864954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.634 [2024-07-15 20:52:40.864985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.634 qpair failed and we were unable to recover it. 00:27:06.634 [2024-07-15 20:52:40.865218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.634 [2024-07-15 20:52:40.865257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.634 qpair failed and we were unable to recover it. 00:27:06.634 [2024-07-15 20:52:40.865542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.634 [2024-07-15 20:52:40.865575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.634 qpair failed and we were unable to recover it. 00:27:06.634 [2024-07-15 20:52:40.865799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.634 [2024-07-15 20:52:40.865830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.634 qpair failed and we were unable to recover it. 00:27:06.634 [2024-07-15 20:52:40.866126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.634 [2024-07-15 20:52:40.866158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.634 qpair failed and we were unable to recover it. 00:27:06.634 [2024-07-15 20:52:40.866374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.634 [2024-07-15 20:52:40.866408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.634 qpair failed and we were unable to recover it. 
00:27:06.634 [2024-07-15 20:52:40.866628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.634 [2024-07-15 20:52:40.866660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.634 qpair failed and we were unable to recover it. 00:27:06.634 [2024-07-15 20:52:40.866889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.634 [2024-07-15 20:52:40.866921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.634 qpair failed and we were unable to recover it. 00:27:06.634 [2024-07-15 20:52:40.867264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.634 [2024-07-15 20:52:40.867298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.634 qpair failed and we were unable to recover it. 00:27:06.634 [2024-07-15 20:52:40.867633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.634 [2024-07-15 20:52:40.867665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.634 qpair failed and we were unable to recover it. 00:27:06.634 [2024-07-15 20:52:40.867843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.634 [2024-07-15 20:52:40.867874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.634 qpair failed and we were unable to recover it. 00:27:06.634 [2024-07-15 20:52:40.868174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.634 [2024-07-15 20:52:40.868205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.634 qpair failed and we were unable to recover it. 00:27:06.634 [2024-07-15 20:52:40.868484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.634 [2024-07-15 20:52:40.868516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.634 qpair failed and we were unable to recover it. 00:27:06.634 [2024-07-15 20:52:40.868824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.634 [2024-07-15 20:52:40.868856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.634 qpair failed and we were unable to recover it. 00:27:06.634 [2024-07-15 20:52:40.869034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.634 [2024-07-15 20:52:40.869066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.634 qpair failed and we were unable to recover it. 00:27:06.634 [2024-07-15 20:52:40.869314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.634 [2024-07-15 20:52:40.869346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.634 qpair failed and we were unable to recover it. 
00:27:06.634 [2024-07-15 20:52:40.869689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.634 [2024-07-15 20:52:40.869720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.634 qpair failed and we were unable to recover it. 00:27:06.634 [2024-07-15 20:52:40.870025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.634 [2024-07-15 20:52:40.870056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.634 qpair failed and we were unable to recover it. 00:27:06.634 [2024-07-15 20:52:40.870275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.634 [2024-07-15 20:52:40.870307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.634 qpair failed and we were unable to recover it. 00:27:06.634 [2024-07-15 20:52:40.870484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.634 [2024-07-15 20:52:40.870516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.634 qpair failed and we were unable to recover it. 00:27:06.634 [2024-07-15 20:52:40.870744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.634 [2024-07-15 20:52:40.870776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.634 qpair failed and we were unable to recover it. 00:27:06.634 [2024-07-15 20:52:40.871026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.634 [2024-07-15 20:52:40.871041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.634 qpair failed and we were unable to recover it. 00:27:06.634 [2024-07-15 20:52:40.871243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.634 [2024-07-15 20:52:40.871259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.634 qpair failed and we were unable to recover it. 00:27:06.634 [2024-07-15 20:52:40.871447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.634 [2024-07-15 20:52:40.871478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.634 qpair failed and we were unable to recover it. 00:27:06.634 [2024-07-15 20:52:40.871701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.634 [2024-07-15 20:52:40.871733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.634 qpair failed and we were unable to recover it. 00:27:06.634 [2024-07-15 20:52:40.871974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.634 [2024-07-15 20:52:40.872005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.634 qpair failed and we were unable to recover it. 
00:27:06.634 [2024-07-15 20:52:40.872309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.634 [2024-07-15 20:52:40.872356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.634 qpair failed and we were unable to recover it. 00:27:06.634 [2024-07-15 20:52:40.872580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.634 [2024-07-15 20:52:40.872612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.634 qpair failed and we were unable to recover it. 00:27:06.634 [2024-07-15 20:52:40.872902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.634 [2024-07-15 20:52:40.872934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.634 qpair failed and we were unable to recover it. 00:27:06.634 [2024-07-15 20:52:40.873216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.634 [2024-07-15 20:52:40.873237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.634 qpair failed and we were unable to recover it. 00:27:06.634 [2024-07-15 20:52:40.873397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.634 [2024-07-15 20:52:40.873414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.634 qpair failed and we were unable to recover it. 00:27:06.634 [2024-07-15 20:52:40.873694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.634 [2024-07-15 20:52:40.873710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.634 qpair failed and we were unable to recover it. 00:27:06.635 [2024-07-15 20:52:40.873944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.873992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 00:27:06.635 [2024-07-15 20:52:40.874254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.874287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 00:27:06.635 [2024-07-15 20:52:40.874526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.874557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 00:27:06.635 [2024-07-15 20:52:40.874818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.874834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 
00:27:06.635 [2024-07-15 20:52:40.875043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.875075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 00:27:06.635 [2024-07-15 20:52:40.875319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.875352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 00:27:06.635 [2024-07-15 20:52:40.875608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.875639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 00:27:06.635 [2024-07-15 20:52:40.875863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.875894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 00:27:06.635 [2024-07-15 20:52:40.876204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.876245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 00:27:06.635 [2024-07-15 20:52:40.876583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.876615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 00:27:06.635 [2024-07-15 20:52:40.876780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.876813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 00:27:06.635 [2024-07-15 20:52:40.877151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.877182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 00:27:06.635 [2024-07-15 20:52:40.877480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.877513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 00:27:06.635 [2024-07-15 20:52:40.877747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.877778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 
00:27:06.635 [2024-07-15 20:52:40.877996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.878012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 00:27:06.635 [2024-07-15 20:52:40.878214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.878238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 00:27:06.635 [2024-07-15 20:52:40.878497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.878513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 00:27:06.635 [2024-07-15 20:52:40.878766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.878781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 00:27:06.635 [2024-07-15 20:52:40.879037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.879053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 00:27:06.635 [2024-07-15 20:52:40.879347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.879379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 00:27:06.635 [2024-07-15 20:52:40.879720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.879751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 00:27:06.635 [2024-07-15 20:52:40.879918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.879949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 00:27:06.635 [2024-07-15 20:52:40.880168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.880208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 00:27:06.635 [2024-07-15 20:52:40.880487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.880503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 
00:27:06.635 [2024-07-15 20:52:40.880711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.880727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 00:27:06.635 [2024-07-15 20:52:40.880935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.880951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 00:27:06.635 [2024-07-15 20:52:40.881274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.881306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 00:27:06.635 [2024-07-15 20:52:40.881571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.881602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 00:27:06.635 [2024-07-15 20:52:40.881863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.881895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 00:27:06.635 [2024-07-15 20:52:40.882113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.882153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 00:27:06.635 [2024-07-15 20:52:40.882365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.882381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 00:27:06.635 [2024-07-15 20:52:40.882530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.882546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 00:27:06.635 [2024-07-15 20:52:40.882705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.882736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 00:27:06.635 [2024-07-15 20:52:40.883051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.883082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 
00:27:06.635 [2024-07-15 20:52:40.883242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.883259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 00:27:06.635 [2024-07-15 20:52:40.883493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.883509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 00:27:06.635 [2024-07-15 20:52:40.883732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.883763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 00:27:06.635 [2024-07-15 20:52:40.884006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.884038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 00:27:06.635 [2024-07-15 20:52:40.884243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.884276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 00:27:06.635 [2024-07-15 20:52:40.884564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.884597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 00:27:06.635 [2024-07-15 20:52:40.884767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.635 [2024-07-15 20:52:40.884812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.635 qpair failed and we were unable to recover it. 00:27:06.636 [2024-07-15 20:52:40.884930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.884945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 00:27:06.636 [2024-07-15 20:52:40.885070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.885115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 00:27:06.636 [2024-07-15 20:52:40.885357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.885390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 
00:27:06.636 [2024-07-15 20:52:40.885626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.885658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 00:27:06.636 [2024-07-15 20:52:40.885882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.885913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 00:27:06.636 [2024-07-15 20:52:40.886245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.886277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 00:27:06.636 [2024-07-15 20:52:40.886508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.886540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 00:27:06.636 [2024-07-15 20:52:40.886830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.886861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 00:27:06.636 [2024-07-15 20:52:40.887048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.887080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 00:27:06.636 [2024-07-15 20:52:40.887259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.887276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 00:27:06.636 [2024-07-15 20:52:40.887542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.887574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 00:27:06.636 [2024-07-15 20:52:40.887822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.887852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 00:27:06.636 [2024-07-15 20:52:40.888178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.888210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 
00:27:06.636 [2024-07-15 20:52:40.888467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.888500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 00:27:06.636 [2024-07-15 20:52:40.888813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.888844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 00:27:06.636 [2024-07-15 20:52:40.889158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.889190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 00:27:06.636 [2024-07-15 20:52:40.889479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.889512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 00:27:06.636 [2024-07-15 20:52:40.889828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.889860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 00:27:06.636 [2024-07-15 20:52:40.890017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.890049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 00:27:06.636 [2024-07-15 20:52:40.890344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.890378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 00:27:06.636 [2024-07-15 20:52:40.890682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.890713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 00:27:06.636 [2024-07-15 20:52:40.891017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.891049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 00:27:06.636 [2024-07-15 20:52:40.891358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.891391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 
00:27:06.636 [2024-07-15 20:52:40.891641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.891673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 00:27:06.636 [2024-07-15 20:52:40.891956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.891987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 00:27:06.636 [2024-07-15 20:52:40.892207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.892245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 00:27:06.636 [2024-07-15 20:52:40.892526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.892543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 00:27:06.636 [2024-07-15 20:52:40.892770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.892787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 00:27:06.636 [2024-07-15 20:52:40.892914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.892930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 00:27:06.636 [2024-07-15 20:52:40.893136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.893167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 00:27:06.636 [2024-07-15 20:52:40.893373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.893406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 00:27:06.636 [2024-07-15 20:52:40.893726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.893770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 00:27:06.636 [2024-07-15 20:52:40.893972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.893989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 
00:27:06.636 [2024-07-15 20:52:40.894232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.894249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 00:27:06.636 [2024-07-15 20:52:40.894504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.894543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 00:27:06.636 [2024-07-15 20:52:40.894782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.894815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 00:27:06.636 [2024-07-15 20:52:40.895078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.895109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 00:27:06.636 [2024-07-15 20:52:40.895351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.895384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 00:27:06.636 [2024-07-15 20:52:40.895544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.895576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 00:27:06.636 [2024-07-15 20:52:40.895883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.895919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 00:27:06.636 [2024-07-15 20:52:40.896157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.636 [2024-07-15 20:52:40.896189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.636 qpair failed and we were unable to recover it. 00:27:06.637 [2024-07-15 20:52:40.896536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.637 [2024-07-15 20:52:40.896569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.637 qpair failed and we were unable to recover it. 00:27:06.637 [2024-07-15 20:52:40.896725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.637 [2024-07-15 20:52:40.896756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.637 qpair failed and we were unable to recover it. 
00:27:06.637 [2024-07-15 20:52:40.897084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.637 [2024-07-15 20:52:40.897116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.637 qpair failed and we were unable to recover it. 00:27:06.637 [2024-07-15 20:52:40.897328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.637 [2024-07-15 20:52:40.897345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.637 qpair failed and we were unable to recover it. 00:27:06.637 [2024-07-15 20:52:40.897678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.637 [2024-07-15 20:52:40.897709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.637 qpair failed and we were unable to recover it. 00:27:06.637 [2024-07-15 20:52:40.897951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.637 [2024-07-15 20:52:40.897984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.637 qpair failed and we were unable to recover it. 00:27:06.637 [2024-07-15 20:52:40.898139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.637 [2024-07-15 20:52:40.898172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.637 qpair failed and we were unable to recover it. 00:27:06.637 [2024-07-15 20:52:40.898483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.637 [2024-07-15 20:52:40.898516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.637 qpair failed and we were unable to recover it. 00:27:06.637 [2024-07-15 20:52:40.898736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.637 [2024-07-15 20:52:40.898768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.637 qpair failed and we were unable to recover it. 00:27:06.637 [2024-07-15 20:52:40.898993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.637 [2024-07-15 20:52:40.899025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.637 qpair failed and we were unable to recover it. 00:27:06.637 [2024-07-15 20:52:40.899304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.637 [2024-07-15 20:52:40.899337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.637 qpair failed and we were unable to recover it. 00:27:06.637 [2024-07-15 20:52:40.899568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.637 [2024-07-15 20:52:40.899600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.637 qpair failed and we were unable to recover it. 
00:27:06.637 [2024-07-15 20:52:40.899907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.637 [2024-07-15 20:52:40.899939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:06.637 qpair failed and we were unable to recover it.
00:27:06.637 [... the same connect() failure (errno = 111, ECONNREFUSED) and unrecoverable-qpair error repeat for every reconnect attempt against tqpair=0x7f4838000b90 from 20:52:40.900261 through 20:52:40.914422 ...]
00:27:06.638 [2024-07-15 20:52:40.914790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.638 [2024-07-15 20:52:40.914867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.638 qpair failed and we were unable to recover it.
00:27:06.642 [... the same failure sequence then repeats against the new tqpair=0x7f4840000b90 (same addr=10.0.0.2, port=4420, errno = 111) for every reconnect attempt from 20:52:40.915139 through 20:52:40.959846 ...]
00:27:06.642 [2024-07-15 20:52:40.960187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.642 [2024-07-15 20:52:40.960219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.642 qpair failed and we were unable to recover it. 00:27:06.642 [2024-07-15 20:52:40.960495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.642 [2024-07-15 20:52:40.960508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.642 qpair failed and we were unable to recover it. 00:27:06.642 [2024-07-15 20:52:40.960794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.642 [2024-07-15 20:52:40.960826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.642 qpair failed and we were unable to recover it. 00:27:06.642 [2024-07-15 20:52:40.961066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.642 [2024-07-15 20:52:40.961104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.642 qpair failed and we were unable to recover it. 00:27:06.642 [2024-07-15 20:52:40.961421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.642 [2024-07-15 20:52:40.961454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.642 qpair failed and we were unable to recover it. 00:27:06.642 [2024-07-15 20:52:40.961768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.642 [2024-07-15 20:52:40.961800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.642 qpair failed and we were unable to recover it. 00:27:06.642 [2024-07-15 20:52:40.962036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.642 [2024-07-15 20:52:40.962068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.642 qpair failed and we were unable to recover it. 00:27:06.642 [2024-07-15 20:52:40.962309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.642 [2024-07-15 20:52:40.962342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.642 qpair failed and we were unable to recover it. 00:27:06.642 [2024-07-15 20:52:40.962618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.642 [2024-07-15 20:52:40.962649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.642 qpair failed and we were unable to recover it. 00:27:06.642 [2024-07-15 20:52:40.962903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.642 [2024-07-15 20:52:40.962935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.642 qpair failed and we were unable to recover it. 
00:27:06.642 [2024-07-15 20:52:40.963254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.642 [2024-07-15 20:52:40.963287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.642 qpair failed and we were unable to recover it. 00:27:06.642 [2024-07-15 20:52:40.963534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.642 [2024-07-15 20:52:40.963566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.642 qpair failed and we were unable to recover it. 00:27:06.642 [2024-07-15 20:52:40.963859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.642 [2024-07-15 20:52:40.963891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.643 qpair failed and we were unable to recover it. 00:27:06.643 [2024-07-15 20:52:40.964214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.643 [2024-07-15 20:52:40.964264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.643 qpair failed and we were unable to recover it. 00:27:06.643 [2024-07-15 20:52:40.964482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.643 [2024-07-15 20:52:40.964507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.643 qpair failed and we were unable to recover it. 00:27:06.643 [2024-07-15 20:52:40.964711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.643 [2024-07-15 20:52:40.964724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.643 qpair failed and we were unable to recover it. 00:27:06.643 [2024-07-15 20:52:40.964900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.643 [2024-07-15 20:52:40.964913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.643 qpair failed and we were unable to recover it. 00:27:06.643 [2024-07-15 20:52:40.965173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.643 [2024-07-15 20:52:40.965204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.643 qpair failed and we were unable to recover it. 00:27:06.643 [2024-07-15 20:52:40.965478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.643 [2024-07-15 20:52:40.965511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.643 qpair failed and we were unable to recover it. 00:27:06.643 [2024-07-15 20:52:40.965737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.643 [2024-07-15 20:52:40.965769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.643 qpair failed and we were unable to recover it. 
00:27:06.643 [2024-07-15 20:52:40.965952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.643 [2024-07-15 20:52:40.965983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.643 qpair failed and we were unable to recover it. 00:27:06.643 [2024-07-15 20:52:40.966239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.643 [2024-07-15 20:52:40.966272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.643 qpair failed and we were unable to recover it. 00:27:06.643 [2024-07-15 20:52:40.966625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.643 [2024-07-15 20:52:40.966657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.643 qpair failed and we were unable to recover it. 00:27:06.643 [2024-07-15 20:52:40.966948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.643 [2024-07-15 20:52:40.966980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.643 qpair failed and we were unable to recover it. 00:27:06.643 [2024-07-15 20:52:40.967300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.643 [2024-07-15 20:52:40.967333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.643 qpair failed and we were unable to recover it. 00:27:06.643 [2024-07-15 20:52:40.967645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.643 [2024-07-15 20:52:40.967677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.643 qpair failed and we were unable to recover it. 00:27:06.643 [2024-07-15 20:52:40.967910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.643 [2024-07-15 20:52:40.967942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.643 qpair failed and we were unable to recover it. 00:27:06.643 [2024-07-15 20:52:40.968195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.643 [2024-07-15 20:52:40.968208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.643 qpair failed and we were unable to recover it. 00:27:06.643 [2024-07-15 20:52:40.968435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.643 [2024-07-15 20:52:40.968448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.643 qpair failed and we were unable to recover it. 00:27:06.643 [2024-07-15 20:52:40.968707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.643 [2024-07-15 20:52:40.968719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.643 qpair failed and we were unable to recover it. 
00:27:06.643 [2024-07-15 20:52:40.968934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.643 [2024-07-15 20:52:40.968947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.643 qpair failed and we were unable to recover it. 00:27:06.643 [2024-07-15 20:52:40.969158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.643 [2024-07-15 20:52:40.969171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.643 qpair failed and we were unable to recover it. 00:27:06.643 [2024-07-15 20:52:40.969458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.643 [2024-07-15 20:52:40.969492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.643 qpair failed and we were unable to recover it. 00:27:06.643 [2024-07-15 20:52:40.969809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.643 [2024-07-15 20:52:40.969841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.643 qpair failed and we were unable to recover it. 00:27:06.643 [2024-07-15 20:52:40.970008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.643 [2024-07-15 20:52:40.970041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.643 qpair failed and we were unable to recover it. 00:27:06.643 [2024-07-15 20:52:40.970333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.643 [2024-07-15 20:52:40.970366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.643 qpair failed and we were unable to recover it. 00:27:06.643 [2024-07-15 20:52:40.970539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.643 [2024-07-15 20:52:40.970570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.643 qpair failed and we were unable to recover it. 00:27:06.643 [2024-07-15 20:52:40.970816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.643 [2024-07-15 20:52:40.970848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.643 qpair failed and we were unable to recover it. 00:27:06.643 [2024-07-15 20:52:40.971075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.643 [2024-07-15 20:52:40.971106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.643 qpair failed and we were unable to recover it. 00:27:06.643 [2024-07-15 20:52:40.971335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.643 [2024-07-15 20:52:40.971368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.643 qpair failed and we were unable to recover it. 
00:27:06.643 [2024-07-15 20:52:40.971617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.643 [2024-07-15 20:52:40.971630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.643 qpair failed and we were unable to recover it. 00:27:06.643 [2024-07-15 20:52:40.971974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.643 [2024-07-15 20:52:40.972006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.643 qpair failed and we were unable to recover it. 00:27:06.643 [2024-07-15 20:52:40.972338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.643 [2024-07-15 20:52:40.972370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.643 qpair failed and we were unable to recover it. 00:27:06.643 [2024-07-15 20:52:40.972659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.643 [2024-07-15 20:52:40.972675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.643 qpair failed and we were unable to recover it. 00:27:06.643 [2024-07-15 20:52:40.972907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.643 [2024-07-15 20:52:40.972939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.643 qpair failed and we were unable to recover it. 00:27:06.643 [2024-07-15 20:52:40.973119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.643 [2024-07-15 20:52:40.973133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.643 qpair failed and we were unable to recover it. 00:27:06.643 [2024-07-15 20:52:40.973350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.643 [2024-07-15 20:52:40.973383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 00:27:06.644 [2024-07-15 20:52:40.973618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.973649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 00:27:06.644 [2024-07-15 20:52:40.973967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.973999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 00:27:06.644 [2024-07-15 20:52:40.974299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.974332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 
00:27:06.644 [2024-07-15 20:52:40.974586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.974599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 00:27:06.644 [2024-07-15 20:52:40.974881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.974894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 00:27:06.644 [2024-07-15 20:52:40.975183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.975216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 00:27:06.644 [2024-07-15 20:52:40.975495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.975528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 00:27:06.644 [2024-07-15 20:52:40.975767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.975800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 00:27:06.644 [2024-07-15 20:52:40.976036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.976068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 00:27:06.644 [2024-07-15 20:52:40.976384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.976419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 00:27:06.644 [2024-07-15 20:52:40.976726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.976758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 00:27:06.644 [2024-07-15 20:52:40.976988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.977020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 00:27:06.644 [2024-07-15 20:52:40.977383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.977398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 
00:27:06.644 [2024-07-15 20:52:40.977573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.977606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 00:27:06.644 [2024-07-15 20:52:40.977940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.977972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 00:27:06.644 [2024-07-15 20:52:40.978207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.978221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 00:27:06.644 [2024-07-15 20:52:40.978444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.978459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 00:27:06.644 [2024-07-15 20:52:40.978660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.978692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 00:27:06.644 [2024-07-15 20:52:40.978920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.978952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 00:27:06.644 [2024-07-15 20:52:40.979259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.979292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 00:27:06.644 [2024-07-15 20:52:40.979565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.979597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 00:27:06.644 [2024-07-15 20:52:40.979937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.979976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 00:27:06.644 [2024-07-15 20:52:40.980103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.980117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 
00:27:06.644 [2024-07-15 20:52:40.980366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.980379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 00:27:06.644 [2024-07-15 20:52:40.980613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.980625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 00:27:06.644 [2024-07-15 20:52:40.980848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.980861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 00:27:06.644 [2024-07-15 20:52:40.981072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.981086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 00:27:06.644 [2024-07-15 20:52:40.981307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.981321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 00:27:06.644 [2024-07-15 20:52:40.981546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.981559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 00:27:06.644 [2024-07-15 20:52:40.981833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.981845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 00:27:06.644 [2024-07-15 20:52:40.982152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.982183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 00:27:06.644 [2024-07-15 20:52:40.982457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.982491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 00:27:06.644 [2024-07-15 20:52:40.982746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.982778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 
00:27:06.644 [2024-07-15 20:52:40.983018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.983050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 00:27:06.644 [2024-07-15 20:52:40.983313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.983347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 00:27:06.644 [2024-07-15 20:52:40.983592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.983625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 00:27:06.644 [2024-07-15 20:52:40.983919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.983956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 00:27:06.644 [2024-07-15 20:52:40.984274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.984308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 00:27:06.644 [2024-07-15 20:52:40.984566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.984597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 00:27:06.644 [2024-07-15 20:52:40.984788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.644 [2024-07-15 20:52:40.984820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.644 qpair failed and we were unable to recover it. 00:27:06.644 [2024-07-15 20:52:40.985144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.985176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 00:27:06.645 [2024-07-15 20:52:40.985433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.985465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 00:27:06.645 [2024-07-15 20:52:40.985696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.985728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 
00:27:06.645 [2024-07-15 20:52:40.986056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.986088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 00:27:06.645 [2024-07-15 20:52:40.986425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.986458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 00:27:06.645 [2024-07-15 20:52:40.986725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.986757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 00:27:06.645 [2024-07-15 20:52:40.987009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.987040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 00:27:06.645 [2024-07-15 20:52:40.987342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.987376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 00:27:06.645 [2024-07-15 20:52:40.987715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.987747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 00:27:06.645 [2024-07-15 20:52:40.988081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.988114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 00:27:06.645 [2024-07-15 20:52:40.988436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.988470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 00:27:06.645 [2024-07-15 20:52:40.988796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.988828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 00:27:06.645 [2024-07-15 20:52:40.989067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.989099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 
00:27:06.645 [2024-07-15 20:52:40.989347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.989379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 00:27:06.645 [2024-07-15 20:52:40.989604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.989636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 00:27:06.645 [2024-07-15 20:52:40.989955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.989987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 00:27:06.645 [2024-07-15 20:52:40.990289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.990322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 00:27:06.645 [2024-07-15 20:52:40.990571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.990603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 00:27:06.645 [2024-07-15 20:52:40.990760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.990793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 00:27:06.645 [2024-07-15 20:52:40.991111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.991142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 00:27:06.645 [2024-07-15 20:52:40.991439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.991472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 00:27:06.645 [2024-07-15 20:52:40.991794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.991826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 00:27:06.645 [2024-07-15 20:52:40.992140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.992172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 
00:27:06.645 [2024-07-15 20:52:40.992452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.992486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 00:27:06.645 [2024-07-15 20:52:40.992755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.992788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 00:27:06.645 [2024-07-15 20:52:40.993118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.993149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 00:27:06.645 [2024-07-15 20:52:40.993475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.993510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 00:27:06.645 [2024-07-15 20:52:40.993775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.993808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 00:27:06.645 [2024-07-15 20:52:40.994020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.994053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 00:27:06.645 [2024-07-15 20:52:40.994223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.994274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 00:27:06.645 [2024-07-15 20:52:40.994518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.994549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 00:27:06.645 [2024-07-15 20:52:40.994846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.994879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 00:27:06.645 [2024-07-15 20:52:40.995106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.995139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 
00:27:06.645 [2024-07-15 20:52:40.995442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.995456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 00:27:06.645 [2024-07-15 20:52:40.995660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.995673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 00:27:06.645 [2024-07-15 20:52:40.995960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.995992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 00:27:06.645 [2024-07-15 20:52:40.996256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.996295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 00:27:06.645 [2024-07-15 20:52:40.996561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.996593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 00:27:06.645 [2024-07-15 20:52:40.996841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.996873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 00:27:06.645 [2024-07-15 20:52:40.997193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.997236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.645 qpair failed and we were unable to recover it. 00:27:06.645 [2024-07-15 20:52:40.997461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.645 [2024-07-15 20:52:40.997493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.646 qpair failed and we were unable to recover it. 00:27:06.646 [2024-07-15 20:52:40.997781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.646 [2024-07-15 20:52:40.997813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.646 qpair failed and we were unable to recover it. 00:27:06.646 [2024-07-15 20:52:40.998138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.646 [2024-07-15 20:52:40.998176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.646 qpair failed and we were unable to recover it. 
00:27:06.646 [2024-07-15 20:52:40.998463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.646 [2024-07-15 20:52:40.998497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.646 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111, sock connection error, unrecoverable qpair) repeats for tqpair=0x7f4840000b90 through 2024-07-15 20:52:41.047757; duplicate entries omitted ...]
00:27:06.651 [2024-07-15 20:52:41.048171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.651 [2024-07-15 20:52:41.048274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.651 qpair failed and we were unable to recover it.
00:27:06.652 [... the same connect() failure (errno = 111) and qpair recovery error for tqpair=0x7f4848000b90 repeated through 20:52:41.060022 ...]
00:27:06.652 [2024-07-15 20:52:41.060408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.652 [2024-07-15 20:52:41.060482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:06.652 qpair failed and we were unable to recover it.
00:27:06.941 [... the same connect() failure (errno = 111) and qpair recovery error for tqpair=0x7f4838000b90 repeated through 20:52:41.105766 ...]
00:27:06.941 [2024-07-15 20:52:41.106061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.941 [2024-07-15 20:52:41.106098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.941 qpair failed and we were unable to recover it. 00:27:06.941 [2024-07-15 20:52:41.106337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.941 [2024-07-15 20:52:41.106372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.941 qpair failed and we were unable to recover it. 00:27:06.941 [2024-07-15 20:52:41.106692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.941 [2024-07-15 20:52:41.106724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.941 qpair failed and we were unable to recover it. 00:27:06.941 [2024-07-15 20:52:41.106882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.941 [2024-07-15 20:52:41.106897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.941 qpair failed and we were unable to recover it. 00:27:06.941 [2024-07-15 20:52:41.107115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.941 [2024-07-15 20:52:41.107147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.941 qpair failed and we were unable to recover it. 00:27:06.941 [2024-07-15 20:52:41.107373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.941 [2024-07-15 20:52:41.107407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.941 qpair failed and we were unable to recover it. 00:27:06.941 [2024-07-15 20:52:41.107736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.941 [2024-07-15 20:52:41.107769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.941 qpair failed and we were unable to recover it. 00:27:06.941 [2024-07-15 20:52:41.108031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.941 [2024-07-15 20:52:41.108063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.941 qpair failed and we were unable to recover it. 00:27:06.941 [2024-07-15 20:52:41.108438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.941 [2024-07-15 20:52:41.108471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.941 qpair failed and we were unable to recover it. 00:27:06.941 [2024-07-15 20:52:41.108645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.941 [2024-07-15 20:52:41.108677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.941 qpair failed and we were unable to recover it. 
00:27:06.941 [2024-07-15 20:52:41.108956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.941 [2024-07-15 20:52:41.108987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.941 qpair failed and we were unable to recover it. 00:27:06.941 [2024-07-15 20:52:41.109307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.941 [2024-07-15 20:52:41.109340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.941 qpair failed and we were unable to recover it. 00:27:06.941 [2024-07-15 20:52:41.109575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.941 [2024-07-15 20:52:41.109592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.941 qpair failed and we were unable to recover it. 00:27:06.941 [2024-07-15 20:52:41.109776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.941 [2024-07-15 20:52:41.109793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.941 qpair failed and we were unable to recover it. 00:27:06.941 [2024-07-15 20:52:41.110084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.941 [2024-07-15 20:52:41.110101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.941 qpair failed and we were unable to recover it. 00:27:06.941 [2024-07-15 20:52:41.110253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.941 [2024-07-15 20:52:41.110270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.941 qpair failed and we were unable to recover it. 00:27:06.941 [2024-07-15 20:52:41.110549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.941 [2024-07-15 20:52:41.110566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.941 qpair failed and we were unable to recover it. 00:27:06.941 [2024-07-15 20:52:41.110697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.941 [2024-07-15 20:52:41.110714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.941 qpair failed and we were unable to recover it. 00:27:06.941 [2024-07-15 20:52:41.110926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.941 [2024-07-15 20:52:41.110958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.941 qpair failed and we were unable to recover it. 00:27:06.941 [2024-07-15 20:52:41.111197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.941 [2024-07-15 20:52:41.111242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.941 qpair failed and we were unable to recover it. 
00:27:06.941 [2024-07-15 20:52:41.111506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.941 [2024-07-15 20:52:41.111538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.941 qpair failed and we were unable to recover it. 00:27:06.941 [2024-07-15 20:52:41.111766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.941 [2024-07-15 20:52:41.111799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.941 qpair failed and we were unable to recover it. 00:27:06.941 [2024-07-15 20:52:41.112070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.941 [2024-07-15 20:52:41.112103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.941 qpair failed and we were unable to recover it. 00:27:06.941 [2024-07-15 20:52:41.112399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.941 [2024-07-15 20:52:41.112433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.941 qpair failed and we were unable to recover it. 00:27:06.941 [2024-07-15 20:52:41.112736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.941 [2024-07-15 20:52:41.112783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.941 qpair failed and we were unable to recover it. 00:27:06.941 [2024-07-15 20:52:41.113121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.941 [2024-07-15 20:52:41.113153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.941 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.113317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.113349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.113660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.113677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.113802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.113819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.114116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.114132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 
00:27:06.942 [2024-07-15 20:52:41.114459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.114491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.114798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.114830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.115058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.115090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.115255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.115286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.115533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.115565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.115784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.115801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.116022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.116040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.116266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.116284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.116565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.116599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.116770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.116803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 
00:27:06.942 [2024-07-15 20:52:41.117044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.117080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.117322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.117355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.117605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.117638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.117871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.117904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.118201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.118240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.118570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.118602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.118896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.118929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.119174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.119206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.120122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.120150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.120409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.120427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 
00:27:06.942 [2024-07-15 20:52:41.120688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.120705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.120964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.120981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.121295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.121312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.121603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.121620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.121882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.121900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.122164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.122181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.122401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.122419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.122733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.122749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.122957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.122973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.123282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.123301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 
00:27:06.942 [2024-07-15 20:52:41.123460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.123477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.123687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.123705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.123851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.123868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.124159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.124177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.124341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.124359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.124562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.124579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.124716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.124733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.124963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.124980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.125193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.125210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.125392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.125441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 
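Every failure above is the same three-line pattern: posix_sock_create() reports connect() failing with errno = 111, nvme_tcp_qpair_connect_sock() turns that into a socket connection error for the qpair (tqpair=0x7f4838000b90, and 0x18d9ed0 in the final record above) against 10.0.0.2 port 4420 (the standard NVMe/TCP port), and the qpair is declared unrecoverable. On Linux, errno 111 is ECONNREFUSED, typically meaning the peer sent a TCP RST because nothing was listening on that port. The following is a minimal standalone sketch (not SPDK code; the address and port are copied from the log) that reproduces the same errno when no listener is present:

/* Minimal sketch, not SPDK code: reproduce the "connect() failed,
 * errno = 111" seen above. On Linux, 111 is ECONNREFUSED, returned
 * when the target host is reachable but nothing listens on the port
 * (here the log's 10.0.0.2:4420, the NVMe/TCP default). */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on the port this prints errno = 111. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}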
00:27:06.942 [2024-07-15 20:52:41.125708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.125747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.125969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.125984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.126263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.126278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.126549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.126563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.126714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.126727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.126930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.126943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.127195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.127208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.127414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.127427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.127630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.127643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.127826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.127840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 
00:27:06.942 [2024-07-15 20:52:41.128095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.128113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.128388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.128401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.128599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.942 [2024-07-15 20:52:41.128612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.942 qpair failed and we were unable to recover it. 00:27:06.942 [2024-07-15 20:52:41.128747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.943 [2024-07-15 20:52:41.128760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.943 qpair failed and we were unable to recover it. 00:27:06.943 [2024-07-15 20:52:41.128965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.943 [2024-07-15 20:52:41.128978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.943 qpair failed and we were unable to recover it. 00:27:06.943 [2024-07-15 20:52:41.129257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.943 [2024-07-15 20:52:41.129270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.943 qpair failed and we were unable to recover it. 00:27:06.943 [2024-07-15 20:52:41.129523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.943 [2024-07-15 20:52:41.129536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.943 qpair failed and we were unable to recover it. 00:27:06.943 [2024-07-15 20:52:41.129734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.943 [2024-07-15 20:52:41.129747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.943 qpair failed and we were unable to recover it. 00:27:06.943 [2024-07-15 20:52:41.129936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.943 [2024-07-15 20:52:41.129951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.943 qpair failed and we were unable to recover it. 00:27:06.943 [2024-07-15 20:52:41.130207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.943 [2024-07-15 20:52:41.130221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.943 qpair failed and we were unable to recover it. 
00:27:06.943 [2024-07-15 20:52:41.130489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.943 [2024-07-15 20:52:41.130503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.943 qpair failed and we were unable to recover it. 00:27:06.943 [2024-07-15 20:52:41.130701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.943 [2024-07-15 20:52:41.130715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.943 qpair failed and we were unable to recover it. 00:27:06.943 [2024-07-15 20:52:41.130922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.943 [2024-07-15 20:52:41.130936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.943 qpair failed and we were unable to recover it. 00:27:06.943 [2024-07-15 20:52:41.131127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.943 [2024-07-15 20:52:41.131141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.943 qpair failed and we were unable to recover it. 00:27:06.943 [2024-07-15 20:52:41.131336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.943 [2024-07-15 20:52:41.131350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.943 qpair failed and we were unable to recover it. 00:27:06.943 [2024-07-15 20:52:41.131615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.943 [2024-07-15 20:52:41.131629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.943 qpair failed and we were unable to recover it. 00:27:06.943 [2024-07-15 20:52:41.131775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.943 [2024-07-15 20:52:41.131788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.943 qpair failed and we were unable to recover it. 00:27:06.943 [2024-07-15 20:52:41.132042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.943 [2024-07-15 20:52:41.132055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.943 qpair failed and we were unable to recover it. 00:27:06.943 [2024-07-15 20:52:41.132252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.943 [2024-07-15 20:52:41.132266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.943 qpair failed and we were unable to recover it. 00:27:06.943 [2024-07-15 20:52:41.132419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.943 [2024-07-15 20:52:41.132433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.943 qpair failed and we were unable to recover it. 
00:27:06.943 [2024-07-15 20:52:41.132711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.943 [2024-07-15 20:52:41.132724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.943 qpair failed and we were unable to recover it. 00:27:06.943 [2024-07-15 20:52:41.132959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.943 [2024-07-15 20:52:41.132973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.943 qpair failed and we were unable to recover it. 00:27:06.943 [2024-07-15 20:52:41.133164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.943 [2024-07-15 20:52:41.133177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.943 qpair failed and we were unable to recover it. 00:27:06.943 [2024-07-15 20:52:41.133460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.943 [2024-07-15 20:52:41.133493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.943 qpair failed and we were unable to recover it. 00:27:06.943 [2024-07-15 20:52:41.133690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.943 [2024-07-15 20:52:41.133721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.943 qpair failed and we were unable to recover it. 00:27:06.943 [2024-07-15 20:52:41.134047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.943 [2024-07-15 20:52:41.134079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.943 qpair failed and we were unable to recover it. 00:27:06.943 [2024-07-15 20:52:41.134340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.943 [2024-07-15 20:52:41.134374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.943 qpair failed and we were unable to recover it. 00:27:06.943 [2024-07-15 20:52:41.134605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.943 [2024-07-15 20:52:41.134638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.943 qpair failed and we were unable to recover it. 00:27:06.943 [2024-07-15 20:52:41.134874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.943 [2024-07-15 20:52:41.134906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.943 qpair failed and we were unable to recover it. 00:27:06.943 [2024-07-15 20:52:41.135177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.943 [2024-07-15 20:52:41.135208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.943 qpair failed and we were unable to recover it. 
00:27:06.943 [2024-07-15 20:52:41.135453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.943 [2024-07-15 20:52:41.135486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.943 qpair failed and we were unable to recover it. 00:27:06.943 [2024-07-15 20:52:41.135762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.943 [2024-07-15 20:52:41.135794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.943 qpair failed and we were unable to recover it. 00:27:06.943 [2024-07-15 20:52:41.136032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.943 [2024-07-15 20:52:41.136046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.943 qpair failed and we were unable to recover it. 00:27:06.943 [2024-07-15 20:52:41.136196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.943 [2024-07-15 20:52:41.136209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.943 qpair failed and we were unable to recover it. 00:27:06.943 [2024-07-15 20:52:41.136493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.943 [2024-07-15 20:52:41.136527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.943 qpair failed and we were unable to recover it. 00:27:06.943 [2024-07-15 20:52:41.136784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.943 [2024-07-15 20:52:41.136816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.943 qpair failed and we were unable to recover it. 00:27:06.943 [2024-07-15 20:52:41.137081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.943 [2024-07-15 20:52:41.137113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.943 qpair failed and we were unable to recover it. 00:27:06.944 [2024-07-15 20:52:41.137277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.944 [2024-07-15 20:52:41.137310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.944 qpair failed and we were unable to recover it. 00:27:06.944 [2024-07-15 20:52:41.137631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.944 [2024-07-15 20:52:41.137676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.944 qpair failed and we were unable to recover it. 00:27:06.944 [2024-07-15 20:52:41.137858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.944 [2024-07-15 20:52:41.137872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.944 qpair failed and we were unable to recover it. 
00:27:06.944 [2024-07-15 20:52:41.138057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.944 [2024-07-15 20:52:41.138072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.944 qpair failed and we were unable to recover it. 00:27:06.944 [2024-07-15 20:52:41.138236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.944 [2024-07-15 20:52:41.138250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.944 qpair failed and we were unable to recover it. 00:27:06.944 [2024-07-15 20:52:41.138588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.944 [2024-07-15 20:52:41.138603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.944 qpair failed and we were unable to recover it. 00:27:06.944 [2024-07-15 20:52:41.138791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.944 [2024-07-15 20:52:41.138806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.944 qpair failed and we were unable to recover it. 00:27:06.944 [2024-07-15 20:52:41.138970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.944 [2024-07-15 20:52:41.139014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.944 qpair failed and we were unable to recover it. 00:27:06.944 [2024-07-15 20:52:41.139294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.944 [2024-07-15 20:52:41.139329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.944 qpair failed and we were unable to recover it. 00:27:06.944 [2024-07-15 20:52:41.139591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.944 [2024-07-15 20:52:41.139625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.944 qpair failed and we were unable to recover it. 00:27:06.944 [2024-07-15 20:52:41.139926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.944 [2024-07-15 20:52:41.139958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.944 qpair failed and we were unable to recover it. 00:27:06.944 [2024-07-15 20:52:41.140242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.944 [2024-07-15 20:52:41.140280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.944 qpair failed and we were unable to recover it. 00:27:06.944 [2024-07-15 20:52:41.140580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.944 [2024-07-15 20:52:41.140595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.944 qpair failed and we were unable to recover it. 
00:27:06.944 [2024-07-15 20:52:41.140821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.944 [2024-07-15 20:52:41.140853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.944 qpair failed and we were unable to recover it. 00:27:06.944 [2024-07-15 20:52:41.141200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.944 [2024-07-15 20:52:41.141245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.944 qpair failed and we were unable to recover it. 00:27:06.944 [2024-07-15 20:52:41.141558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.944 [2024-07-15 20:52:41.141590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.944 qpair failed and we were unable to recover it. 00:27:06.944 [2024-07-15 20:52:41.141775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.944 [2024-07-15 20:52:41.141807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.944 qpair failed and we were unable to recover it. 00:27:06.944 [2024-07-15 20:52:41.142133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.944 [2024-07-15 20:52:41.142166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.944 qpair failed and we were unable to recover it. 00:27:06.944 [2024-07-15 20:52:41.142434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.944 [2024-07-15 20:52:41.142467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.944 qpair failed and we were unable to recover it. 00:27:06.944 [2024-07-15 20:52:41.142805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.944 [2024-07-15 20:52:41.142819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.944 qpair failed and we were unable to recover it. 00:27:06.944 [2024-07-15 20:52:41.143020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.944 [2024-07-15 20:52:41.143034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.944 qpair failed and we were unable to recover it. 00:27:06.944 [2024-07-15 20:52:41.143238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.944 [2024-07-15 20:52:41.143252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.944 qpair failed and we were unable to recover it. 00:27:06.944 [2024-07-15 20:52:41.143514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.944 [2024-07-15 20:52:41.143527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.944 qpair failed and we were unable to recover it. 
00:27:06.944 [2024-07-15 20:52:41.143721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.944 [2024-07-15 20:52:41.143735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.143941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.143954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.144253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.144267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.144522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.144535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.144750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.144764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.144993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.145007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.145169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.145182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.145420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.145433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.145714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.145727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.145920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.145933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 
00:27:06.945 [2024-07-15 20:52:41.146065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.146079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.146251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.146265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.146471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.146484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.146663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.146677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.146959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.146972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.147109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.147122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.147388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.147401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.147580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.147593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.147919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.147932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.148201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.148213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 
00:27:06.945 [2024-07-15 20:52:41.148418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.148435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.148571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.148584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.148863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.148875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.149080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.149093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.149290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.149304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.149565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.149578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.149785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.149799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.150090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.150103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.150253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.150266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.150570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.150583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 
00:27:06.945 [2024-07-15 20:52:41.150901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.150914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.151094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.151107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.151356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.151369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.151523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.151536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.151735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.151748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.151927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.151941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.152206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.152219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.152505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.152519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.152661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.152674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.152905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.152918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 
00:27:06.945 [2024-07-15 20:52:41.153115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.153129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.153409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.153422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.153673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.153687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.153930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.153943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.154139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.154153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.154428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.154442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.154644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.154656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.154947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.154961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.155145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.155159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.155353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.155367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 
00:27:06.945 [2024-07-15 20:52:41.155662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.155675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.155873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.155886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.156082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.156094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.156217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.156241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.156377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.156390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.156618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.156631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.156915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.156928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.157197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.945 [2024-07-15 20:52:41.157210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.945 qpair failed and we were unable to recover it. 00:27:06.945 [2024-07-15 20:52:41.157431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.157445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.157720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.157732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 
00:27:06.946 [2024-07-15 20:52:41.157965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.157980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.158182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.158196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.158473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.158487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.158682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.158695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.158908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.158921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.159135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.159148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.159343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.159357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.159564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.159577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.159724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.159736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.160021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.160034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 
00:27:06.946 [2024-07-15 20:52:41.160316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.160329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.160636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.160649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.160923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.160936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.161188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.161201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.161420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.161434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.161567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.161580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.161784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.161797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.162019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.162032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.162231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.162245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.162447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.162460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 
00:27:06.946 [2024-07-15 20:52:41.162666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.162679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.162979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.162992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.163179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.163192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.163385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.163398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.163602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.163615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.163729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.163742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.163927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.163941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.164214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.164238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.164433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.164446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.164725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.164738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 
00:27:06.946 [2024-07-15 20:52:41.165000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.165014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.165295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.165308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.165485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.165498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.165783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.165797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.166106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.166119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.166394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.166407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.166539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.166553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.166755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.166768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.167042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.167055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.167253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.167267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 
00:27:06.946 [2024-07-15 20:52:41.167559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.167574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.167824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.167837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.168084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.168097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.168366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.168380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.168634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.168647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.168827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.168841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.169043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.169056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.169192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.169206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.169433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.169447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.169719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.169732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 
00:27:06.946 [2024-07-15 20:52:41.169909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.169923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.170196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.170209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.170436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.170450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.170700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.170713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.171012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.171026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.946 [2024-07-15 20:52:41.171148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.946 [2024-07-15 20:52:41.171160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.946 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.171413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.171426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.171626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.171639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.171901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.171914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.172209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.172221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 
00:27:06.947 [2024-07-15 20:52:41.172430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.172444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.172593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.172607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.172895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.172908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.173187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.173200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.173487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.173501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.173769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.173781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.174077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.174089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.174293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.174307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.174420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.174432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.174705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.174718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 
00:27:06.947 [2024-07-15 20:52:41.174995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.175008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.175288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.175302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.175509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.175522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.175722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.175736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.175965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.175977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.176234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.176247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.176354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.176368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.176607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.176620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.176799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.176811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.177070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.177083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 
00:27:06.947 [2024-07-15 20:52:41.177367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.177383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.177609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.177621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.177804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.177817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.178031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.178043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.178220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.178239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.178458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.178471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.178645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.178658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.178873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.178886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.179002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.179015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.179154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.179167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 
00:27:06.947 [2024-07-15 20:52:41.179393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.179405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.179584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.179597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.179870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.179883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.180071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.180084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.180337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.180351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.180552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.180564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.180763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.180777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.180978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.180991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.181216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.181233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.181357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.181369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 
00:27:06.947 [2024-07-15 20:52:41.181569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.181582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.181711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.181723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.181851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.181864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.181974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.181986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.182181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.182194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.182384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.182397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.182528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.182540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.947 [2024-07-15 20:52:41.182730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.947 [2024-07-15 20:52:41.182743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.947 qpair failed and we were unable to recover it. 00:27:06.948 [2024-07-15 20:52:41.182946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.948 [2024-07-15 20:52:41.182959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.948 qpair failed and we were unable to recover it. 00:27:06.948 [2024-07-15 20:52:41.183168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.948 [2024-07-15 20:52:41.183181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.948 qpair failed and we were unable to recover it. 
00:27:06.950 [2024-07-15 20:52:41.210848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.950 [2024-07-15 20:52:41.210885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.950 qpair failed and we were unable to recover it.
00:27:06.950 [2024-07-15 20:52:41.211221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.950 [2024-07-15 20:52:41.211264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.950 qpair failed and we were unable to recover it.
00:27:06.950 [2024-07-15 20:52:41.211619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.950 [2024-07-15 20:52:41.211657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:06.950 qpair failed and we were unable to recover it.
[... the repetition then resumes for tqpair=0x7f4840000b90 through 20:52:41.230 ...]
00:27:06.951 [2024-07-15 20:52:41.230858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.951 [2024-07-15 20:52:41.230870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.951 qpair failed and we were unable to recover it. 00:27:06.951 [2024-07-15 20:52:41.231061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.951 [2024-07-15 20:52:41.231073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.951 qpair failed and we were unable to recover it. 00:27:06.951 [2024-07-15 20:52:41.231249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.951 [2024-07-15 20:52:41.231262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.951 qpair failed and we were unable to recover it. 00:27:06.951 [2024-07-15 20:52:41.231446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.951 [2024-07-15 20:52:41.231458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.951 qpair failed and we were unable to recover it. 00:27:06.951 [2024-07-15 20:52:41.231704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.951 [2024-07-15 20:52:41.231715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.951 qpair failed and we were unable to recover it. 00:27:06.951 [2024-07-15 20:52:41.231893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.951 [2024-07-15 20:52:41.231906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.951 qpair failed and we were unable to recover it. 00:27:06.951 [2024-07-15 20:52:41.232145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.951 [2024-07-15 20:52:41.232157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.951 qpair failed and we were unable to recover it. 00:27:06.951 [2024-07-15 20:52:41.232398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.951 [2024-07-15 20:52:41.232410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.951 qpair failed and we were unable to recover it. 00:27:06.951 [2024-07-15 20:52:41.232601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.951 [2024-07-15 20:52:41.232613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.951 qpair failed and we were unable to recover it. 00:27:06.951 [2024-07-15 20:52:41.232873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.951 [2024-07-15 20:52:41.232885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.951 qpair failed and we were unable to recover it. 
00:27:06.951 [2024-07-15 20:52:41.233146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.951 [2024-07-15 20:52:41.233159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.951 qpair failed and we were unable to recover it. 00:27:06.951 [2024-07-15 20:52:41.233373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.951 [2024-07-15 20:52:41.233386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.951 qpair failed and we were unable to recover it. 00:27:06.951 [2024-07-15 20:52:41.233667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.951 [2024-07-15 20:52:41.233679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.951 qpair failed and we were unable to recover it. 00:27:06.951 [2024-07-15 20:52:41.233803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.951 [2024-07-15 20:52:41.233814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.951 qpair failed and we were unable to recover it. 00:27:06.951 [2024-07-15 20:52:41.234028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.951 [2024-07-15 20:52:41.234040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.951 qpair failed and we were unable to recover it. 00:27:06.951 [2024-07-15 20:52:41.234336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.951 [2024-07-15 20:52:41.234348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.951 qpair failed and we were unable to recover it. 00:27:06.951 [2024-07-15 20:52:41.234532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.951 [2024-07-15 20:52:41.234544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.951 qpair failed and we were unable to recover it. 00:27:06.951 [2024-07-15 20:52:41.234806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.951 [2024-07-15 20:52:41.234818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.951 qpair failed and we were unable to recover it. 00:27:06.951 [2024-07-15 20:52:41.235032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.951 [2024-07-15 20:52:41.235043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.951 qpair failed and we were unable to recover it. 00:27:06.951 [2024-07-15 20:52:41.235282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.951 [2024-07-15 20:52:41.235295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.951 qpair failed and we were unable to recover it. 
00:27:06.951 [2024-07-15 20:52:41.235478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.951 [2024-07-15 20:52:41.235490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.951 qpair failed and we were unable to recover it. 00:27:06.951 [2024-07-15 20:52:41.235751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.951 [2024-07-15 20:52:41.235764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.951 qpair failed and we were unable to recover it. 00:27:06.951 [2024-07-15 20:52:41.236002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.951 [2024-07-15 20:52:41.236013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.951 qpair failed and we were unable to recover it. 00:27:06.951 [2024-07-15 20:52:41.236275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.951 [2024-07-15 20:52:41.236289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.951 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.236475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.236487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.236772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.236784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.237046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.237058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.237243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.237256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.237429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.237442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.237702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.237715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 
00:27:06.952 [2024-07-15 20:52:41.237979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.237991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.238277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.238289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.238495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.238507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.238768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.238780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.239028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.239041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.239287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.239300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.239568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.239580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.239753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.239767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.240004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.240016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.240131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.240142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 
00:27:06.952 [2024-07-15 20:52:41.240335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.240348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.240535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.240547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.240795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.240808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.241043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.241055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.241318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.241331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.241584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.241596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.241833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.241845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.241968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.241980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.242155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.242168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.242428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.242441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 
00:27:06.952 [2024-07-15 20:52:41.242634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.242646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.242898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.242910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.243025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.243036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.243273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.243286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.243559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.243572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.243760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.243771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.243973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.243985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.244247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.244259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.244462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.244473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.244603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.244615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 
00:27:06.952 [2024-07-15 20:52:41.244854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.244866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.245127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.245139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.245250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.245262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.245521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.245534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.245794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.245805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.245987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.245999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.246116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.246128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.246265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.246278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.246471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.246483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.246653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.246665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 
00:27:06.952 [2024-07-15 20:52:41.246822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.246835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.247071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.247083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.247322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.247335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.247540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.247552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.247723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.247734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.247906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.247919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.248177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.248188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.248426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.248440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.248698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.248709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.248826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.248837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 
00:27:06.952 [2024-07-15 20:52:41.249006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.249017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.952 [2024-07-15 20:52:41.249252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.952 [2024-07-15 20:52:41.249265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.952 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.249446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.249457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.249626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.249639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.249903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.249915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.250112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.250125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.250329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.250341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.250514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.250525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.250784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.250797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.251058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.251070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 
00:27:06.953 [2024-07-15 20:52:41.251261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.251274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.251460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.251472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.251732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.251744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.251942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.251954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.252139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.252150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.252332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.252345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.252520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.252533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.252795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.252808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.252932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.252944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.253183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.253195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 
00:27:06.953 [2024-07-15 20:52:41.253366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.253378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.253567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.253579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.253852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.253864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.254131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.254143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.254384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.254396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.254658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.254669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.254925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.254937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.255121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.255132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.255326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.255339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.255506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.255518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 
00:27:06.953 [2024-07-15 20:52:41.255707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.255719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.255983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.255996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.256129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.256141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.256350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.256361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.256597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.256609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.256864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.256876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.257138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.257150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.257346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.257361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.257531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.257542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.257731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.257743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 
00:27:06.953 [2024-07-15 20:52:41.257979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.257991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.258174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.258186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.258450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.258462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.258648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.258659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.258855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.258867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.259119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.259131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.259389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.259401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.259576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.259587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.259712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.259727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.259916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.259926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 
00:27:06.953 [2024-07-15 20:52:41.260177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.260189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.260469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.260481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.260670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.260682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.260924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.953 [2024-07-15 20:52:41.260935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.953 qpair failed and we were unable to recover it. 00:27:06.953 [2024-07-15 20:52:41.261136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.261147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.261342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.261354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.261542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.261554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.261719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.261731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.261964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.261975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.262139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.262150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 
00:27:06.954 [2024-07-15 20:52:41.262353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.262366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.262632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.262643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.262916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.262928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.263094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.263106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.263294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.263307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.263515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.263528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.263696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.263708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.263886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.263899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.264162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.264173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.264303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.264315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 
00:27:06.954 [2024-07-15 20:52:41.264484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.264496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.264675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.264686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.264890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.264902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.265099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.265111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.265386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.265398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.265669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.265681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.265916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.265928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.266193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.266207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.266388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.266400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.266662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.266674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 
00:27:06.954 [2024-07-15 20:52:41.266933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.266945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.267247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.267260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.267449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.267461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.267651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.267663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.267842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.267853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.268115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.268127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.268344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.268357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.268544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.268556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.268740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.268752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.268871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.268884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 
00:27:06.954 [2024-07-15 20:52:41.269052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.269065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.269278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.269290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.269535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.269546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.269751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.269763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.270049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.270061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.270233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.270245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.270402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.270414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.270674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.270685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.270947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.270959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.271162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.271175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 
00:27:06.954 [2024-07-15 20:52:41.271337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.271349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.271616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.271628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.271751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.271762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.271998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.272010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.272134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.272147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.272338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.272350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.272536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.272548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.272653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.272663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.272848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.272860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 00:27:06.954 [2024-07-15 20:52:41.273123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.273135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.954 qpair failed and we were unable to recover it. 
00:27:06.954 [2024-07-15 20:52:41.273379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.954 [2024-07-15 20:52:41.273391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.273636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.273647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.273862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.273873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.274074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.274086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.274292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.274304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.274490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.274502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.274712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.274724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.274965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.274978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.275232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.275244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.275451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.275462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 
00:27:06.955 [2024-07-15 20:52:41.275668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.275679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.275812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.275823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.276011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.276024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.276281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.276293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.276550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.276562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.276730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.276742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.277001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.277013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.277222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.277238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.277509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.277520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.277725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.277737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 
00:27:06.955 [2024-07-15 20:52:41.278017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.278029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.278162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.278174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.278277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.278288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.278403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.278415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.278581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.278593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.278836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.278848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.278973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.278985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.279247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.279259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.279493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.279504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.279636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.279648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 
00:27:06.955 [2024-07-15 20:52:41.279902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.279914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.280045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.280057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.280335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.280347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.280632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.280643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.280905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.280916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.281121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.281133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.281247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.281260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.281520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.281532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.281818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.281830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.282092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.282104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 
00:27:06.955 [2024-07-15 20:52:41.282367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.282380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.282594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.282605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.282793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.282805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.283019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.283030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.283216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.283231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.283443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.283455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.283713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.283725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.283900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.283914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.284101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.284113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.284379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.284392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 
00:27:06.955 [2024-07-15 20:52:41.284572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.284584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.284869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.284881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.285094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.285105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.285290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.285303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.285563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.285574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.285837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.285848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.286083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.286094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.286298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.286312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.286575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.286587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.286782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.286793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 
00:27:06.955 [2024-07-15 20:52:41.287076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.287088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.287327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.287339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.287451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.287462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.287638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.287650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.287885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.287896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.288069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.288081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.288247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.288268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.288450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.288462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.288671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.288683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.288929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.288941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 
00:27:06.955 [2024-07-15 20:52:41.289182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.289193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.289380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.289392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.289649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.289661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.289899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.289910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.290189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.290210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.290471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.290488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.290735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.290750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.290940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.290955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.291230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.291247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.291358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.291373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 
00:27:06.955 [2024-07-15 20:52:41.291645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.955 [2024-07-15 20:52:41.291660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.955 qpair failed and we were unable to recover it. 00:27:06.955 [2024-07-15 20:52:41.291911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.291927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.292197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.292212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.292502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.292516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.292768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.292780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.293074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.293086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.293368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.293381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.293562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.293576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.293827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.293839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.294072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.294084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 
00:27:06.956 [2024-07-15 20:52:41.294202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.294213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.294397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.294408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.294669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.294681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.294873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.294885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.294992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.295003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.295263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.295275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.295514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.295525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.295713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.295725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.295959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.295971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.296235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.296248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 
00:27:06.956 [2024-07-15 20:52:41.296352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.296363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.296530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.296542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.296744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.296755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.297014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.297026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.297239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.297252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.297535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.297547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.297715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.297727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.297911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.297923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.298040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.298051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.298259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.298271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 
00:27:06.956 [2024-07-15 20:52:41.298529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.298542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.298723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.298735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.298926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.298938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.299128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.299140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.299326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.299347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.299635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.299651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.299792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.299808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.299996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.300011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.300299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.300315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.300501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.300517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 
00:27:06.956 [2024-07-15 20:52:41.300791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.300806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.301009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.301024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.301208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.301228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.301445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.301460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.301726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.301741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.301985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.302000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.302125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.302139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.302381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.302401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.302593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.302608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 00:27:06.956 [2024-07-15 20:52:41.302878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.956 [2024-07-15 20:52:41.302893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:06.956 qpair failed and we were unable to recover it. 
00:27:06.956 [2024-07-15 20:52:41.303077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.956 [2024-07-15 20:52:41.303093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:06.956 qpair failed and we were unable to recover it.
00:27:06.956 [2024-07-15 20:52:41.303362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.956 [2024-07-15 20:52:41.303377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:06.956 qpair failed and we were unable to recover it.
00:27:06.956 [2024-07-15 20:52:41.303653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.956 [2024-07-15 20:52:41.303668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:06.956 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.303881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.303896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.304166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.304181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.304430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.304446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.304564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.304579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.304835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.304851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.305109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.305125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.305385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.305401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.305657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.305673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.305923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.305939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.306206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.306221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.306468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.306484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.306658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.306674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.306851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.306867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.307042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.307058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.307252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.307268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.307533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.307549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.307665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.307679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.307825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.307840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.308082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.308098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.308343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.308358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.308606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.308621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.308888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.308909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.309181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.309198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.309415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.309428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.309712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.309724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.309936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.309948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.310206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.310218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.310483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.310494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.310732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.310743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.311002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.311014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.311293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.311305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.311541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.311553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.311808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.311820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.312003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.312015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.312208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.312222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.312398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.312411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.312599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.312610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.312790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.312803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.312976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.312987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.313223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.313239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.313457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.313469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.313755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.313767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.314023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.314035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.314279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.314291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.314409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.314421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.314683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.314695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.314898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.314910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.315162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.315174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.315352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.315364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.315549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.315560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.315797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.315809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.316012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.316024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.316258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.316270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.316528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.316540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.316728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.316740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.316929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.316940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.317123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.317136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.317401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.957 [2024-07-15 20:52:41.317414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.957 qpair failed and we were unable to recover it.
00:27:06.957 [2024-07-15 20:52:41.317601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.317613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.317816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.317828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.317996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.318008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.318306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.318325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.318566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.318584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.318793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.318808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.319024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.319039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.319235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.319250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.319439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.319454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.319721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.319736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.319927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.319943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.320238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.320254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.320498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.320513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.320639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.320654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.320851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.320866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.321169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.321185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.321370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.321391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.321664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.321679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.321859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.321874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.322088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.322103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.322326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.322342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.322551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.322566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.322785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.322800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.323066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.323082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.323351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.323367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.323636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.323651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.323834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.323849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.323981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.323996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.324260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.324276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.324495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.324511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.324732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.324747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.325004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.325020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.325264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.325280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.325480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.325495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.325718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.325733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.326000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.326016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.326288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.326304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.326573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.326589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.326859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.326874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.327091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.327107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.327295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.327310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.327456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.327472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.327688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.327703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.327911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.327928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.328113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.328128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.328386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.328401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.328706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.328721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.328925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.328940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.329148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.329163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.329432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.329448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.329643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.329658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.329872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.329887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.330100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.330115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.330303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.330319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.330586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.330601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.330824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.330839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.958 qpair failed and we were unable to recover it.
00:27:06.958 [2024-07-15 20:52:41.331104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.958 [2024-07-15 20:52:41.331119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.331335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.331351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.331544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.331559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.331804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.331819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.332083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.332097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.332300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.332315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.332560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.332575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.332786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.332801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.332993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.333008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.333275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.333291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.333418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.333432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.333652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.333667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.333864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.333879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.334094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.334109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.334308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.334324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.334599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.334614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.334887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.334903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.335167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.335183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.335481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.335496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.335768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.335783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.335996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.336011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.336139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.336154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.336348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.336364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.336607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.336622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.336933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.336948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.337217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.337238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.337498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.337513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.337696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.337713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.337857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.337873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.338163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.338178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.338366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.338382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.338595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.338610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.338724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.338737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.339023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.339039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.339305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.339320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.339589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.339604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.339847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.339863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.340055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.340071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.340270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.340286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.340535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.340550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.340684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.340699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.340883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.340898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.341023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.341039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.341335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.341351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.341574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.341590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.341783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.341799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.341984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.342000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.342231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.342247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.342491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.342506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.342752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.342768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.342976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.342991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.343293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.343308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.959 [2024-07-15 20:52:41.343574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.959 [2024-07-15 20:52:41.343589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:06.959 qpair failed and we were unable to recover it.
00:27:06.960 [2024-07-15 20:52:41.343838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.343853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.344153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.344169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.344290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.344306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.344557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.344573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.344823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.344839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.345096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.345111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.345376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.345393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.345586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.345601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.345896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.345911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.346179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.346194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 
00:27:06.960 [2024-07-15 20:52:41.346443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.346459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.346752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.346767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.347036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.347049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.347175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.347187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.347434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.347450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.347658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.347670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.347796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.347809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.348047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.348059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.348281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.348294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.348470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.348482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 
00:27:06.960 [2024-07-15 20:52:41.348682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.348695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.348952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.348965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.349244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.349257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.349547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.349559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.349691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.349703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.349886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.349899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.350169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.350183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.350375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.350390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.350593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.350607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.350801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.350815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 
00:27:06.960 [2024-07-15 20:52:41.351027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.351041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.351255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.351269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.351490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.351504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.351696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.351710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.352005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.352019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.352289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.352303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.352555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.352569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.352812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.352828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.352969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.352984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.353236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.353252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 
00:27:06.960 [2024-07-15 20:52:41.353373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.353388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.353643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.353659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.353906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.353921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.354117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.354132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.354346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.354362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.354551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.354566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.354780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.354794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.354976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.354991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.355103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.355117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.355387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.355403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 
00:27:06.960 [2024-07-15 20:52:41.355536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.355550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.355763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.355779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.355923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.355938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.356065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.356080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.356261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.356278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.356524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.356539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.356716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.356731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.960 [2024-07-15 20:52:41.356974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.960 [2024-07-15 20:52:41.356990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.960 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.357242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.357258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.357396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.357410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 
00:27:06.961 [2024-07-15 20:52:41.357680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.357695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.357887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.357902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.358079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.358094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.358354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.358370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.358604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.358619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.358865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.358879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.359090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.359104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.359282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.359298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.359465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.359480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.359671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.359686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 
00:27:06.961 [2024-07-15 20:52:41.359862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.359877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.360024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.360040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.360219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.360239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.360426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.360441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.360624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.360639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.360841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.360857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.361109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.361124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.361344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.361360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.361615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.361630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.361812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.361827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 
00:27:06.961 [2024-07-15 20:52:41.362014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.362029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.362176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.362191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.362396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.362411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.362682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.362698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.362889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.362903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.363106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.363121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.363315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.363330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.363597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.363613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.363810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.363825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.363963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.363978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 
00:27:06.961 [2024-07-15 20:52:41.364222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.364242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.364453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.364468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.364689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.364705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.364907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.364922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.365131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.365148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.365315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.365331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.365556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.365571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.365769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.365784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.366028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.366044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.366219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.366246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 
00:27:06.961 [2024-07-15 20:52:41.366438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.366453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.366744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.366759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.366901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.366916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.367101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.367117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.367349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.367365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.367614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.367629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.367901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.367916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.368114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.368130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.368311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.368326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 00:27:06.961 [2024-07-15 20:52:41.368515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.961 [2024-07-15 20:52:41.368530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:06.961 qpair failed and we were unable to recover it. 
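errno 111 is ECONNREFUSED on Linux: every connect() to 10.0.0.2 port 4420 (the default NVMe/TCP port) was actively refused, which usually means nothing was listening on that address/port at the moment the host tried to connect. A minimal standalone C sketch (not SPDK code; only the address and port are copied from the log above) shows how connect() surfaces this errno:

/*
 * Minimal sketch, not SPDK code: demonstrates connect() failing with
 * errno = 111 (ECONNREFUSED) when no listener is bound to the target.
 * Address 10.0.0.2 and port 4420 mirror the values in the log above.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),   /* default NVMe/TCP port */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the port, Linux reports errno = 111. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}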
00:27:06.961 [2024-07-15 20:52:41.368729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.961 [2024-07-15 20:52:41.368743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:06.961 qpair failed and we were unable to recover it.
00:27:06.963 [the same failure triplet, now against tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420, repeats 80 times in total, with advancing timestamps from 20:52:41.368729 through 20:52:41.386757]
00:27:06.963 [2024-07-15 20:52:41.386923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.963 [2024-07-15 20:52:41.386935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.963 qpair failed and we were unable to recover it. 00:27:06.963 [2024-07-15 20:52:41.387050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.963 [2024-07-15 20:52:41.387062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.963 qpair failed and we were unable to recover it. 00:27:06.963 [2024-07-15 20:52:41.387273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.963 [2024-07-15 20:52:41.387285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.963 qpair failed and we were unable to recover it. 00:27:06.963 [2024-07-15 20:52:41.387444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.963 [2024-07-15 20:52:41.387456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.963 qpair failed and we were unable to recover it. 00:27:06.963 [2024-07-15 20:52:41.387701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.963 [2024-07-15 20:52:41.387713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.963 qpair failed and we were unable to recover it. 00:27:06.963 [2024-07-15 20:52:41.387948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.963 [2024-07-15 20:52:41.387960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.963 qpair failed and we were unable to recover it. 00:27:06.963 [2024-07-15 20:52:41.388234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.963 [2024-07-15 20:52:41.388246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.963 qpair failed and we were unable to recover it. 00:27:06.963 [2024-07-15 20:52:41.388519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.963 [2024-07-15 20:52:41.388531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.963 qpair failed and we were unable to recover it. 00:27:06.963 [2024-07-15 20:52:41.388714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.963 [2024-07-15 20:52:41.388726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.963 qpair failed and we were unable to recover it. 00:27:06.963 [2024-07-15 20:52:41.388938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.963 [2024-07-15 20:52:41.388950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.963 qpair failed and we were unable to recover it. 
00:27:06.963 [2024-07-15 20:52:41.389058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.963 [2024-07-15 20:52:41.389069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.963 qpair failed and we were unable to recover it. 00:27:06.963 [2024-07-15 20:52:41.389268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.963 [2024-07-15 20:52:41.389281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.963 qpair failed and we were unable to recover it. 00:27:06.963 [2024-07-15 20:52:41.389524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.963 [2024-07-15 20:52:41.389536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.963 qpair failed and we were unable to recover it. 00:27:06.963 [2024-07-15 20:52:41.389655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.963 [2024-07-15 20:52:41.389666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.963 qpair failed and we were unable to recover it. 00:27:06.963 [2024-07-15 20:52:41.389788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.963 [2024-07-15 20:52:41.389799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.963 qpair failed and we were unable to recover it. 00:27:06.963 [2024-07-15 20:52:41.389991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.963 [2024-07-15 20:52:41.390002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.963 qpair failed and we were unable to recover it. 00:27:06.963 [2024-07-15 20:52:41.390129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.963 [2024-07-15 20:52:41.390141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.963 qpair failed and we were unable to recover it. 00:27:06.963 [2024-07-15 20:52:41.390344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.963 [2024-07-15 20:52:41.390357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.963 qpair failed and we were unable to recover it. 00:27:06.963 [2024-07-15 20:52:41.390651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.963 [2024-07-15 20:52:41.390662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.963 qpair failed and we were unable to recover it. 00:27:06.963 [2024-07-15 20:52:41.390853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.963 [2024-07-15 20:52:41.390864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.963 qpair failed and we were unable to recover it. 
00:27:06.963 [2024-07-15 20:52:41.391030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.963 [2024-07-15 20:52:41.391042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.963 qpair failed and we were unable to recover it. 00:27:06.963 [2024-07-15 20:52:41.391215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.963 [2024-07-15 20:52:41.391230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.963 qpair failed and we were unable to recover it. 00:27:06.963 [2024-07-15 20:52:41.391479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.963 [2024-07-15 20:52:41.391493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.963 qpair failed and we were unable to recover it. 00:27:06.963 [2024-07-15 20:52:41.391728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.963 [2024-07-15 20:52:41.391740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.963 qpair failed and we were unable to recover it. 00:27:06.963 [2024-07-15 20:52:41.391911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.963 [2024-07-15 20:52:41.391926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.963 qpair failed and we were unable to recover it. 00:27:06.963 [2024-07-15 20:52:41.392123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.963 [2024-07-15 20:52:41.392135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.963 qpair failed and we were unable to recover it. 00:27:06.963 [2024-07-15 20:52:41.392330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.963 [2024-07-15 20:52:41.392343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.963 qpair failed and we were unable to recover it. 00:27:06.963 [2024-07-15 20:52:41.392580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.963 [2024-07-15 20:52:41.392592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.963 qpair failed and we were unable to recover it. 00:27:06.963 [2024-07-15 20:52:41.392720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.963 [2024-07-15 20:52:41.392731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.963 qpair failed and we were unable to recover it. 00:27:06.963 [2024-07-15 20:52:41.392921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.963 [2024-07-15 20:52:41.392933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.963 qpair failed and we were unable to recover it. 
00:27:06.963 [2024-07-15 20:52:41.393047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.963 [2024-07-15 20:52:41.393059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.963 qpair failed and we were unable to recover it. 00:27:06.963 [2024-07-15 20:52:41.393161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.963 [2024-07-15 20:52:41.393170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.963 qpair failed and we were unable to recover it. 00:27:06.963 [2024-07-15 20:52:41.393373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.963 [2024-07-15 20:52:41.393385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.963 qpair failed and we were unable to recover it. 00:27:06.963 [2024-07-15 20:52:41.393583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.963 [2024-07-15 20:52:41.393595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.963 qpair failed and we were unable to recover it. 00:27:06.963 [2024-07-15 20:52:41.393780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.963 [2024-07-15 20:52:41.393793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.963 qpair failed and we were unable to recover it. 00:27:06.963 [2024-07-15 20:52:41.394057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.963 [2024-07-15 20:52:41.394069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.963 qpair failed and we were unable to recover it. 00:27:06.963 [2024-07-15 20:52:41.394172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.964 [2024-07-15 20:52:41.394182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.964 qpair failed and we were unable to recover it. 00:27:06.964 [2024-07-15 20:52:41.394443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.964 [2024-07-15 20:52:41.394456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.964 qpair failed and we were unable to recover it. 00:27:06.964 [2024-07-15 20:52:41.394587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.964 [2024-07-15 20:52:41.394600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.964 qpair failed and we were unable to recover it. 00:27:06.964 [2024-07-15 20:52:41.394789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.964 [2024-07-15 20:52:41.394801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.964 qpair failed and we were unable to recover it. 
00:27:06.964 [2024-07-15 20:52:41.395107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.964 [2024-07-15 20:52:41.395120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.964 qpair failed and we were unable to recover it. 00:27:06.964 [2024-07-15 20:52:41.395357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.964 [2024-07-15 20:52:41.395369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.964 qpair failed and we were unable to recover it. 00:27:06.964 [2024-07-15 20:52:41.395630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.964 [2024-07-15 20:52:41.395646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.964 qpair failed and we were unable to recover it. 00:27:06.964 [2024-07-15 20:52:41.395823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.964 [2024-07-15 20:52:41.395835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.964 qpair failed and we were unable to recover it. 00:27:06.964 [2024-07-15 20:52:41.396025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.964 [2024-07-15 20:52:41.396037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.964 qpair failed and we were unable to recover it. 00:27:06.964 [2024-07-15 20:52:41.396160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.964 [2024-07-15 20:52:41.396172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.964 qpair failed and we were unable to recover it. 00:27:06.964 [2024-07-15 20:52:41.396365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.964 [2024-07-15 20:52:41.396377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:06.964 qpair failed and we were unable to recover it. 00:27:07.256 [2024-07-15 20:52:41.396589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.396603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 00:27:07.256 [2024-07-15 20:52:41.396737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.396748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 00:27:07.256 [2024-07-15 20:52:41.396962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.396975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 
00:27:07.256 [2024-07-15 20:52:41.397162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.397174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 00:27:07.256 [2024-07-15 20:52:41.397395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.397417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 00:27:07.256 [2024-07-15 20:52:41.397550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.397564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 00:27:07.256 [2024-07-15 20:52:41.397714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.397728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 00:27:07.256 [2024-07-15 20:52:41.397865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.397878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 00:27:07.256 [2024-07-15 20:52:41.398012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.398026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 00:27:07.256 [2024-07-15 20:52:41.398269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.398285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 00:27:07.256 [2024-07-15 20:52:41.398495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.398511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 00:27:07.256 [2024-07-15 20:52:41.398650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.398666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 00:27:07.256 [2024-07-15 20:52:41.398780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.398795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 
00:27:07.256 [2024-07-15 20:52:41.399001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.399017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 00:27:07.256 [2024-07-15 20:52:41.399212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.399231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 00:27:07.256 [2024-07-15 20:52:41.399460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.399475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 00:27:07.256 [2024-07-15 20:52:41.399672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.399687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 00:27:07.256 [2024-07-15 20:52:41.399871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.399889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 00:27:07.256 [2024-07-15 20:52:41.400086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.400100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 00:27:07.256 [2024-07-15 20:52:41.400344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.400359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 00:27:07.256 [2024-07-15 20:52:41.400493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.400508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 00:27:07.256 [2024-07-15 20:52:41.400712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.400727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 00:27:07.256 [2024-07-15 20:52:41.401013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.401029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 
00:27:07.256 [2024-07-15 20:52:41.401284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.401300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 00:27:07.256 [2024-07-15 20:52:41.401481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.401496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 00:27:07.256 [2024-07-15 20:52:41.401722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.401738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 00:27:07.256 [2024-07-15 20:52:41.402020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.402034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 00:27:07.256 [2024-07-15 20:52:41.402298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.402314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 00:27:07.256 [2024-07-15 20:52:41.402566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.402582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 00:27:07.256 [2024-07-15 20:52:41.402854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.402869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 00:27:07.256 [2024-07-15 20:52:41.403065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.403080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 00:27:07.256 [2024-07-15 20:52:41.403264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.403280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 00:27:07.256 [2024-07-15 20:52:41.403478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.403493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 
00:27:07.256 [2024-07-15 20:52:41.403670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.403685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 00:27:07.256 [2024-07-15 20:52:41.403896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.403912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 00:27:07.256 [2024-07-15 20:52:41.404085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.404100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 00:27:07.256 [2024-07-15 20:52:41.404320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.404336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 00:27:07.256 [2024-07-15 20:52:41.404603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.404618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 00:27:07.256 [2024-07-15 20:52:41.404805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.404820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 00:27:07.256 [2024-07-15 20:52:41.405121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.405136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 00:27:07.256 [2024-07-15 20:52:41.405400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.256 [2024-07-15 20:52:41.405416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.256 qpair failed and we were unable to recover it. 00:27:07.257 [2024-07-15 20:52:41.405675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.405691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-07-15 20:52:41.405871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.405886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 
00:27:07.257 [2024-07-15 20:52:41.406130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.406145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-07-15 20:52:41.406378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.406398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-07-15 20:52:41.406637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.406653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-07-15 20:52:41.406917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.406932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-07-15 20:52:41.407177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.407192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-07-15 20:52:41.407398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.407414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-07-15 20:52:41.407706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.407721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-07-15 20:52:41.407933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.407948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-07-15 20:52:41.408141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.408156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-07-15 20:52:41.408314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.408331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 
00:27:07.257 [2024-07-15 20:52:41.408531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.408545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-07-15 20:52:41.408796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.408810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-07-15 20:52:41.409006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.409021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-07-15 20:52:41.409269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.409284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-07-15 20:52:41.409491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.409510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-07-15 20:52:41.409782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.409800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-07-15 20:52:41.409983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.409998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-07-15 20:52:41.410211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.410231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-07-15 20:52:41.410409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.410425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-07-15 20:52:41.410560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.410576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 
00:27:07.257 [2024-07-15 20:52:41.410754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.410769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-07-15 20:52:41.410961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.410976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-07-15 20:52:41.411174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.411189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-07-15 20:52:41.411318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.411334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-07-15 20:52:41.411561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.411576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-07-15 20:52:41.411772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.411787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-07-15 20:52:41.412021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.412036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-07-15 20:52:41.412183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.412198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-07-15 20:52:41.412405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.412421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-07-15 20:52:41.412624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.412641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 
00:27:07.257 [2024-07-15 20:52:41.412885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.412901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-07-15 20:52:41.413193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.413209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-07-15 20:52:41.413476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.413492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-07-15 20:52:41.413762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.413778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-07-15 20:52:41.413974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.413990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-07-15 20:52:41.414204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.414219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-07-15 20:52:41.414431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.414447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-07-15 20:52:41.414696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.414710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-07-15 20:52:41.414965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.414979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 00:27:07.257 [2024-07-15 20:52:41.415164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.257 [2024-07-15 20:52:41.415180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.257 qpair failed and we were unable to recover it. 
00:27:07.257 [2024-07-15 20:52:41.415426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.257 [2024-07-15 20:52:41.415441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:07.257 qpair failed and we were unable to recover it.
00:27:07.257 [... roughly 200 further copies of this three-line record, timestamped 20:52:41.415707 through 20:52:41.460718, elided: every connect() attempt fails with errno = 111, first for tqpair=0x18d9ed0 and then for tqpair=0x7f4840000b90, always against addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:27:07.261 [2024-07-15 20:52:41.460829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.261 [2024-07-15 20:52:41.460842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.261 qpair failed and we were unable to recover it.
00:27:07.261 [2024-07-15 20:52:41.461014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-07-15 20:52:41.461026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-07-15 20:52:41.461286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-07-15 20:52:41.461297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-07-15 20:52:41.461466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-07-15 20:52:41.461478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-07-15 20:52:41.461656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-07-15 20:52:41.461667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-07-15 20:52:41.461914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-07-15 20:52:41.461927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-07-15 20:52:41.462138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-07-15 20:52:41.462150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-07-15 20:52:41.462365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-07-15 20:52:41.462377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-07-15 20:52:41.462650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-07-15 20:52:41.462661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-07-15 20:52:41.462861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-07-15 20:52:41.462873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-07-15 20:52:41.463087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-07-15 20:52:41.463098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 
00:27:07.261 [2024-07-15 20:52:41.463281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-07-15 20:52:41.463293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-07-15 20:52:41.463460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-07-15 20:52:41.463471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-07-15 20:52:41.463597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-07-15 20:52:41.463609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-07-15 20:52:41.463778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-07-15 20:52:41.463790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-07-15 20:52:41.464001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-07-15 20:52:41.464013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-07-15 20:52:41.464183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-07-15 20:52:41.464195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-07-15 20:52:41.464327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-07-15 20:52:41.464339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-07-15 20:52:41.464530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-07-15 20:52:41.464542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-07-15 20:52:41.464787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-07-15 20:52:41.464799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-07-15 20:52:41.465086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-07-15 20:52:41.465099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 
00:27:07.261 [2024-07-15 20:52:41.465356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-07-15 20:52:41.465368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-07-15 20:52:41.465553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-07-15 20:52:41.465565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-07-15 20:52:41.465821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-07-15 20:52:41.465832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-07-15 20:52:41.466072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-07-15 20:52:41.466084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-07-15 20:52:41.466339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-07-15 20:52:41.466351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-07-15 20:52:41.466562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-07-15 20:52:41.466574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-07-15 20:52:41.466858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-07-15 20:52:41.466870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-07-15 20:52:41.467048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-07-15 20:52:41.467061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-07-15 20:52:41.467179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-07-15 20:52:41.467189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-07-15 20:52:41.467358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-07-15 20:52:41.467370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 
00:27:07.261 [2024-07-15 20:52:41.467639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-07-15 20:52:41.467651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-07-15 20:52:41.467833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-07-15 20:52:41.467845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.261 [2024-07-15 20:52:41.468028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.261 [2024-07-15 20:52:41.468040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.261 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.468305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.468317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.468501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.468513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.468695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.468707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.468991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.469003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.469185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.469197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.469440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.469452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.469634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.469646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 
00:27:07.262 [2024-07-15 20:52:41.469815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.469827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.469943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.469954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.470189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.470200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.470464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.470477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.470732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.470744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.470928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.470940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.471174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.471186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.471368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.471380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.471572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.471583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.471793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.471806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 
00:27:07.262 [2024-07-15 20:52:41.471977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.471989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.472104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.472116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.472285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.472297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.472565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.472577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.472833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.472845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.473135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.473147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.473408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.473420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.473590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.473602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.473863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.473874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.474053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.474067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 
00:27:07.262 [2024-07-15 20:52:41.474239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.474252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.474375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.474387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.474582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.474593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.474839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.474851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.475110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.475122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.475357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.475369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.475649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.475661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.475942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.475954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.476190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.476202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.476466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.476478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 
00:27:07.262 [2024-07-15 20:52:41.476659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.476671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.476910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.476921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.477106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.477118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.477290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.477303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.477490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.477502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.477758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.477770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.478008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.478020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.478215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.478237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.478503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.478514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.478804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.478816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 
00:27:07.262 [2024-07-15 20:52:41.478985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.478997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.479204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.479216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.479479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.479492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.479708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.479719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.479902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.262 [2024-07-15 20:52:41.479914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.262 qpair failed and we were unable to recover it. 00:27:07.262 [2024-07-15 20:52:41.480101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.263 [2024-07-15 20:52:41.480113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.263 qpair failed and we were unable to recover it. 00:27:07.263 [2024-07-15 20:52:41.480286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.263 [2024-07-15 20:52:41.480297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.263 qpair failed and we were unable to recover it. 00:27:07.263 [2024-07-15 20:52:41.480478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.263 [2024-07-15 20:52:41.480490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.263 qpair failed and we were unable to recover it. 00:27:07.263 [2024-07-15 20:52:41.480723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.263 [2024-07-15 20:52:41.480735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.263 qpair failed and we were unable to recover it. 00:27:07.263 [2024-07-15 20:52:41.480974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.263 [2024-07-15 20:52:41.480986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.263 qpair failed and we were unable to recover it. 
00:27:07.263 [2024-07-15 20:52:41.481259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.263 [2024-07-15 20:52:41.481272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.263 qpair failed and we were unable to recover it. 00:27:07.263 [2024-07-15 20:52:41.481513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.263 [2024-07-15 20:52:41.481525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.263 qpair failed and we were unable to recover it. 00:27:07.263 [2024-07-15 20:52:41.481708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.263 [2024-07-15 20:52:41.481719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.263 qpair failed and we were unable to recover it. 00:27:07.263 [2024-07-15 20:52:41.481977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.263 [2024-07-15 20:52:41.481988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.263 qpair failed and we were unable to recover it. 00:27:07.263 [2024-07-15 20:52:41.482167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.263 [2024-07-15 20:52:41.482179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.263 qpair failed and we were unable to recover it. 00:27:07.263 [2024-07-15 20:52:41.482445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.263 [2024-07-15 20:52:41.482457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.263 qpair failed and we were unable to recover it. 00:27:07.263 [2024-07-15 20:52:41.482689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.263 [2024-07-15 20:52:41.482701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.263 qpair failed and we were unable to recover it. 00:27:07.263 [2024-07-15 20:52:41.482886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.263 [2024-07-15 20:52:41.482898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.263 qpair failed and we were unable to recover it. 00:27:07.263 [2024-07-15 20:52:41.483075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.263 [2024-07-15 20:52:41.483088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.263 qpair failed and we were unable to recover it. 00:27:07.263 [2024-07-15 20:52:41.483263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.263 [2024-07-15 20:52:41.483276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.263 qpair failed and we were unable to recover it. 
00:27:07.263 [2024-07-15 20:52:41.483513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.263 [2024-07-15 20:52:41.483525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.263 qpair failed and we were unable to recover it. 00:27:07.263 [2024-07-15 20:52:41.483642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.263 [2024-07-15 20:52:41.483654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.263 qpair failed and we were unable to recover it. 00:27:07.263 [2024-07-15 20:52:41.483828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.263 [2024-07-15 20:52:41.483839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.263 qpair failed and we were unable to recover it. 00:27:07.263 [2024-07-15 20:52:41.483976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.263 [2024-07-15 20:52:41.483988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.263 qpair failed and we were unable to recover it. 00:27:07.263 [2024-07-15 20:52:41.484097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.263 [2024-07-15 20:52:41.484108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.263 qpair failed and we were unable to recover it. 00:27:07.263 [2024-07-15 20:52:41.484315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.263 [2024-07-15 20:52:41.484326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.263 qpair failed and we were unable to recover it. 00:27:07.263 [2024-07-15 20:52:41.484467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.263 [2024-07-15 20:52:41.484479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.263 qpair failed and we were unable to recover it. 00:27:07.263 [2024-07-15 20:52:41.484723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.263 [2024-07-15 20:52:41.484734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.263 qpair failed and we were unable to recover it. 00:27:07.263 [2024-07-15 20:52:41.484859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.263 [2024-07-15 20:52:41.484872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.263 qpair failed and we were unable to recover it. 00:27:07.263 [2024-07-15 20:52:41.485041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.263 [2024-07-15 20:52:41.485053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.263 qpair failed and we were unable to recover it. 
00:27:07.263 [2024-07-15 20:52:41.485314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.263 [2024-07-15 20:52:41.485326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.263 qpair failed and we were unable to recover it. 00:27:07.263 [2024-07-15 20:52:41.485437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.263 [2024-07-15 20:52:41.485448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.263 qpair failed and we were unable to recover it. 00:27:07.263 [2024-07-15 20:52:41.485706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.263 [2024-07-15 20:52:41.485718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.263 qpair failed and we were unable to recover it. 00:27:07.263 [2024-07-15 20:52:41.485889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.263 [2024-07-15 20:52:41.485901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.263 qpair failed and we were unable to recover it. 00:27:07.263 [2024-07-15 20:52:41.486136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.263 [2024-07-15 20:52:41.486147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.263 qpair failed and we were unable to recover it. 00:27:07.263 [2024-07-15 20:52:41.486316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.263 [2024-07-15 20:52:41.486328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.263 qpair failed and we were unable to recover it. 00:27:07.263 [2024-07-15 20:52:41.486589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.263 [2024-07-15 20:52:41.486601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.263 qpair failed and we were unable to recover it. 00:27:07.263 [2024-07-15 20:52:41.486835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.263 [2024-07-15 20:52:41.486847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.263 qpair failed and we were unable to recover it. 00:27:07.263 [2024-07-15 20:52:41.487022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.263 [2024-07-15 20:52:41.487035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.263 qpair failed and we were unable to recover it. 00:27:07.263 [2024-07-15 20:52:41.487300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.263 [2024-07-15 20:52:41.487313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.263 qpair failed and we were unable to recover it. 
00:27:07.263 [2024-07-15 20:52:41.488927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8000 (9): Bad file descriptor
00:27:07.263 [2024-07-15 20:52:41.489150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.263 [2024-07-15 20:52:41.489175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:07.263 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x7f4848000b90 eleven more times through 20:52:41.491969, then retries against tqpair=0x7f4840000b90 resume at 20:52:41.492186 and fail identically roughly 30 more times through 20:52:41.499104; repetitions elided ...]
00:27:07.264 [2024-07-15 20:52:41.499297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.264 [2024-07-15 20:52:41.499309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.264 qpair failed and we were unable to recover it. 00:27:07.264 [2024-07-15 20:52:41.499507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.264 [2024-07-15 20:52:41.499519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.264 qpair failed and we were unable to recover it. 00:27:07.264 [2024-07-15 20:52:41.499709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.264 [2024-07-15 20:52:41.499721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.264 qpair failed and we were unable to recover it. 00:27:07.264 [2024-07-15 20:52:41.499971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.264 [2024-07-15 20:52:41.499983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.264 qpair failed and we were unable to recover it. 00:27:07.264 [2024-07-15 20:52:41.500170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.264 [2024-07-15 20:52:41.500182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.264 qpair failed and we were unable to recover it. 00:27:07.264 [2024-07-15 20:52:41.500495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.264 [2024-07-15 20:52:41.500507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.264 qpair failed and we were unable to recover it. 00:27:07.264 [2024-07-15 20:52:41.500690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.264 [2024-07-15 20:52:41.500702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.264 qpair failed and we were unable to recover it. 00:27:07.264 [2024-07-15 20:52:41.500886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.264 [2024-07-15 20:52:41.500899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.264 qpair failed and we were unable to recover it. 00:27:07.264 [2024-07-15 20:52:41.501161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.264 [2024-07-15 20:52:41.501172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.264 qpair failed and we were unable to recover it. 00:27:07.264 [2024-07-15 20:52:41.501379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.264 [2024-07-15 20:52:41.501392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.264 qpair failed and we were unable to recover it. 
00:27:07.264 [2024-07-15 20:52:41.501578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.264 [2024-07-15 20:52:41.501590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.264 qpair failed and we were unable to recover it. 00:27:07.264 [2024-07-15 20:52:41.501760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.264 [2024-07-15 20:52:41.501772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.264 qpair failed and we were unable to recover it. 00:27:07.264 [2024-07-15 20:52:41.501952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.264 [2024-07-15 20:52:41.501964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.264 qpair failed and we were unable to recover it. 00:27:07.264 [2024-07-15 20:52:41.502150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.264 [2024-07-15 20:52:41.502162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.264 qpair failed and we were unable to recover it. 00:27:07.264 [2024-07-15 20:52:41.502374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.264 [2024-07-15 20:52:41.502387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.264 qpair failed and we were unable to recover it. 00:27:07.264 [2024-07-15 20:52:41.502654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.264 [2024-07-15 20:52:41.502666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.264 qpair failed and we were unable to recover it. 00:27:07.264 [2024-07-15 20:52:41.502802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.264 [2024-07-15 20:52:41.502814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.264 qpair failed and we were unable to recover it. 00:27:07.264 [2024-07-15 20:52:41.503072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.264 [2024-07-15 20:52:41.503083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.264 qpair failed and we were unable to recover it. 00:27:07.264 [2024-07-15 20:52:41.503287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.264 [2024-07-15 20:52:41.503299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.264 qpair failed and we were unable to recover it. 00:27:07.264 [2024-07-15 20:52:41.503472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.264 [2024-07-15 20:52:41.503483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.264 qpair failed and we were unable to recover it. 
00:27:07.264 [2024-07-15 20:52:41.503681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.264 [2024-07-15 20:52:41.503693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.264 qpair failed and we were unable to recover it. 00:27:07.264 [2024-07-15 20:52:41.503887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.264 [2024-07-15 20:52:41.503899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.264 qpair failed and we were unable to recover it. 00:27:07.264 [2024-07-15 20:52:41.504083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.264 [2024-07-15 20:52:41.504095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.264 qpair failed and we were unable to recover it. 00:27:07.264 [2024-07-15 20:52:41.504338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.264 [2024-07-15 20:52:41.504352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.264 qpair failed and we were unable to recover it. 00:27:07.264 [2024-07-15 20:52:41.504523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.264 [2024-07-15 20:52:41.504535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.264 qpair failed and we were unable to recover it. 00:27:07.264 [2024-07-15 20:52:41.504793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.264 [2024-07-15 20:52:41.504804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.264 qpair failed and we were unable to recover it. 00:27:07.264 [2024-07-15 20:52:41.505058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.264 [2024-07-15 20:52:41.505070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.264 qpair failed and we were unable to recover it. 00:27:07.264 [2024-07-15 20:52:41.505312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.264 [2024-07-15 20:52:41.505324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.264 qpair failed and we were unable to recover it. 00:27:07.264 [2024-07-15 20:52:41.505518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.264 [2024-07-15 20:52:41.505530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.264 qpair failed and we were unable to recover it. 00:27:07.264 [2024-07-15 20:52:41.505768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.264 [2024-07-15 20:52:41.505780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.264 qpair failed and we were unable to recover it. 
00:27:07.264 [2024-07-15 20:52:41.506029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.264 [2024-07-15 20:52:41.506041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.264 qpair failed and we were unable to recover it. 00:27:07.264 [2024-07-15 20:52:41.506214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.264 [2024-07-15 20:52:41.506229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.264 qpair failed and we were unable to recover it. 00:27:07.264 [2024-07-15 20:52:41.506490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.264 [2024-07-15 20:52:41.506501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.264 qpair failed and we were unable to recover it. 00:27:07.264 [2024-07-15 20:52:41.506672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.506684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.506944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.506956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.507191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.507203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.507334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.507346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.507613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.507625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.507888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.507899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.508075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.508087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 
00:27:07.265 [2024-07-15 20:52:41.508199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.508211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.508463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.508475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.508602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.508614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.508846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.508857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.509123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.509135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.509299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.509310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.509575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.509586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.509842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.509854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.510139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.510152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.510415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.510427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 
00:27:07.265 [2024-07-15 20:52:41.510653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.510665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.510835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.510847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.511107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.511119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.511313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.511325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.511609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.511621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.511810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.511822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.512062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.512074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.512243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.512255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.512512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.512523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.512777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.512788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 
00:27:07.265 [2024-07-15 20:52:41.513034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.513046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.513298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.513311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.513571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.513583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.513821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.513835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.514087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.514099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.514313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.514326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.514511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.514523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.514715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.514727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.514915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.514928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.515114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.515126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 
00:27:07.265 [2024-07-15 20:52:41.515320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.515333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.515518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.515530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.515783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.515795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.515965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.515976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.516213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.516229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.516508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.516520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.516691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.516703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.516878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.516890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.517151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.517163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.517347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.517359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 
00:27:07.265 [2024-07-15 20:52:41.517540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.517551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.517731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.517743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.517915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.517927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.518196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.518208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.518448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.265 [2024-07-15 20:52:41.518460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.265 qpair failed and we were unable to recover it. 00:27:07.265 [2024-07-15 20:52:41.518706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.518718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.518984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.518996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.519244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.519256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.519449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.519460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.519631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.519643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 
00:27:07.266 [2024-07-15 20:52:41.519854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.519866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.520038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.520050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.520235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.520247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.520512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.520524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.520702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.520714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.520973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.520986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.521229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.521241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.521486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.521497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.521734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.521746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.522011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.522022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 
00:27:07.266 [2024-07-15 20:52:41.522214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.522230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.522406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.522418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.522607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.522619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.522854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.522868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.523044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.523055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.523174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.523186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.523424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.523436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.523643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.523655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.523830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.523842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.524011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.524022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 
00:27:07.266 [2024-07-15 20:52:41.524259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.524273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.524378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.524390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.524670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.524681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.524929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.524941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.525129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.525141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.525312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.525324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.525522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.525534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.525797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.525809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.526081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.526093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.526327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.526339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 
00:27:07.266 [2024-07-15 20:52:41.526597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.526610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.526849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.526861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.527120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.527132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.527384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.527396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.527575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.527586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.527756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.527768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.528052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.528064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.528250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.528261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.528470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.528481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.528722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.528734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 
00:27:07.266 [2024-07-15 20:52:41.528986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.528998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.529257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.529269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.529510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.529521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.529776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.529787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.529973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.529985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.530247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.530259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.530513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.530524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.530762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.530773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.531007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.531019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.531290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.531302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 
00:27:07.266 [2024-07-15 20:52:41.531548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.531560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.531801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.531812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.532077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.266 [2024-07-15 20:52:41.532089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.266 qpair failed and we were unable to recover it. 00:27:07.266 [2024-07-15 20:52:41.532352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.532367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.532551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.532562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.532690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.532702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.532876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.532889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.533060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.533072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.533339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.533351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.533585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.533596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 
00:27:07.267 [2024-07-15 20:52:41.533855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.533867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.534033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.534045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.534169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.534179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.534431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.534442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.534566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.534578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.534813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.534825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.534992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.535004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.535215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.535232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.535424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.535435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.535628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.535639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 
00:27:07.267 [2024-07-15 20:52:41.535810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.535822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.536085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.536097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.536351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.536363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.536490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.536502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.536750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.536762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.537008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.537019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.537220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.537235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.537417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.537429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.537673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.537684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.537885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.537896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 
00:27:07.267 [2024-07-15 20:52:41.538201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.538213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.538405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.538417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.538652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.538665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.538853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.538864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.539144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.539156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.539269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.539281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.539392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.539405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.539639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.539651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.539832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.539843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.540110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.540121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 
00:27:07.267 [2024-07-15 20:52:41.540382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.540394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.540627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.540638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.540825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.540836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.541071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.541086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.541324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.541336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.541597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.541608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.541865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.541877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.542056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.542068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.542256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.542268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.542504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.542516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 
00:27:07.267 [2024-07-15 20:52:41.542776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.542788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.542971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.542983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.543175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.543187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.543466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.543478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.543666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.543677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.543953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.543965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.544245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.544257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.544468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.544479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.544641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.544652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.544788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.544799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 
00:27:07.267 [2024-07-15 20:52:41.544984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.544996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.267 [2024-07-15 20:52:41.545256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.267 [2024-07-15 20:52:41.545269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.267 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.545457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.545470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.545759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.545770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.546007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.546019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.546187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.546199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.546329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.546341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.546602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.546614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.546873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.546885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.547118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.547130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 
00:27:07.268 [2024-07-15 20:52:41.547433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.547455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.547726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.547742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.547934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.547949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.548081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.548097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.548299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.548315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.548513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.548528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.548797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.548812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.549002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.549018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.549208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.549222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.549373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.549389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 
00:27:07.268 [2024-07-15 20:52:41.549574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.549590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.549775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.549790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.549993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.550006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.550187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.550200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.550465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.550476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.550646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.550658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.550840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.550852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.551115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.551128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.551311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.551322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.551462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.551474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 
00:27:07.268 [2024-07-15 20:52:41.551646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.551658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.551910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.551921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.552159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.552171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.552436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.552448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.552703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.552715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.552943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.552954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.553140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.553151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.553390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.553401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.553590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.553601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.553809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.553821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 
00:27:07.268 [2024-07-15 20:52:41.554060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.554071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.554333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.554345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.554516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.554528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.554710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.554722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.554979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.554991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.555231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.555243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.555413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.555425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.555684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.555695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.555810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.555822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.555988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.556000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 
00:27:07.268 [2024-07-15 20:52:41.556257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.556271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.556530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.556542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.556711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.556723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.556958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.268 [2024-07-15 20:52:41.556970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.268 qpair failed and we were unable to recover it. 00:27:07.268 [2024-07-15 20:52:41.557204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.557216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.557408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.557421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.557589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.557601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.557834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.557846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.558105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.558117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.558283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.558295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 
00:27:07.269 [2024-07-15 20:52:41.558500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.558512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.558706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.558717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.558915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.558927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.559209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.559221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.559469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.559480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.559678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.559689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.559925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.559936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.560124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.560135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.560313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.560324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.560587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.560600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 
00:27:07.269 [2024-07-15 20:52:41.560793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.560805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.560979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.560991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.561148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.561159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.561345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.561356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.561615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.561626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.561858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.561870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.561977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.561987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.562188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.562200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.562315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.562328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.562576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.562588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 
00:27:07.269 [2024-07-15 20:52:41.562848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.562860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.563045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.563057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.563230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.563243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.563515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.563527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.563740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.563751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.564038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.564050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.564309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.564322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.564528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.564540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.564813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.564824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.565029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.565041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 
00:27:07.269 [2024-07-15 20:52:41.565346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.565360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.565602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.565614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.565788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.565800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.565987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.565999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.566213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.566229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.566370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.566381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.566548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.566559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.566831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.566842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.567039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.567051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.567174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.567185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 
00:27:07.269 [2024-07-15 20:52:41.567453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.567465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.567586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.567597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.567800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.567812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.568045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.568057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.568271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.568283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.568472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.568484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.568721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.568732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.568936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.568948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.569159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.569171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.569341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.569354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 
00:27:07.269 [2024-07-15 20:52:41.569543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.569555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.569789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.569800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.269 [2024-07-15 20:52:41.569980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.269 [2024-07-15 20:52:41.569992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.269 qpair failed and we were unable to recover it. 00:27:07.270 [2024-07-15 20:52:41.570231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.270 [2024-07-15 20:52:41.570244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.270 qpair failed and we were unable to recover it. 00:27:07.270 [2024-07-15 20:52:41.570461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.270 [2024-07-15 20:52:41.570474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.270 qpair failed and we were unable to recover it. 00:27:07.270 [2024-07-15 20:52:41.570770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.270 [2024-07-15 20:52:41.570781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.270 qpair failed and we were unable to recover it. 00:27:07.270 [2024-07-15 20:52:41.570997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.270 [2024-07-15 20:52:41.571008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.270 qpair failed and we were unable to recover it. 00:27:07.270 [2024-07-15 20:52:41.571256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.270 [2024-07-15 20:52:41.571269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.270 qpair failed and we were unable to recover it. 00:27:07.270 [2024-07-15 20:52:41.571511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.270 [2024-07-15 20:52:41.571523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.270 qpair failed and we were unable to recover it. 00:27:07.270 [2024-07-15 20:52:41.571692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.270 [2024-07-15 20:52:41.571704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.270 qpair failed and we were unable to recover it. 
00:27:07.270 [2024-07-15 20:52:41.571940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.571952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.572130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.572141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.572317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.572329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.572530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.572542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.572798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.572809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.573089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.573101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.573335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.573347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.573607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.573619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.573799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.573811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.573983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.573996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.574232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.574246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.574458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.574469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.574756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.574768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.575024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.575036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.575219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.575234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.575353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.575365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.575624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.575636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.575814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.575826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.576061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.576073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.576175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.576186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.576364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.576377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.576613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.576625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.576814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.576826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.577022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.577034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.577143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.577154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.577322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.577333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.577586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.577598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.577831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.577843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.578078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.578091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.578283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.578295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.578464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.578476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.578746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.578757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.579054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.579066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.579276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.579288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.579467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.579479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.579723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.579735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.579943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.579954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.580248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.580261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.580435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.580447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.580692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.580703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.580973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.580985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.581247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.581259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.581432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.581444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.581639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.581651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.581913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.581925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.582047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.582058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.582240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.582253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.582453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.582464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.582710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.582722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.582915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.582926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.583169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.270 [2024-07-15 20:52:41.583183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.270 qpair failed and we were unable to recover it.
00:27:07.270 [2024-07-15 20:52:41.583422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.583434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.583627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.583638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.583896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.583907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.584107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.584119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.584331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.584343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.584607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.584619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.584804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.584816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.585102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.585113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.585304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.585316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.585583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.585594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.585807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.585819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.586012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.586024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.586239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.586251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.586512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.586523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.586790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.586802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.587084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.587096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.587333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.587345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.587580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.587592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.587862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.587873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.588118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.588130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.588396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.588409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.588581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.588593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.588855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.588867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.589128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.589140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.589307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.589319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.589577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.589589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.589825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.589837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.590012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.590023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.590260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.590272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.590443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.590455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.590717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.590728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.590898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.590910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.591124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.591136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.591313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.591325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.591514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.591526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.591711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.591723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.591856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.591868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.592054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.592067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.592326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.592338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.592599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.592614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.592876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.592888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.593138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.593150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.593337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.593349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.593606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.593617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.593796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.593808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.594006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.594019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.594198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.594210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.594526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.594538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.594791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.594803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.595068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.595080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.595314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.595326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.595579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.271 [2024-07-15 20:52:41.595591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.271 qpair failed and we were unable to recover it.
00:27:07.271 [2024-07-15 20:52:41.595802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.595814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.595997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.596009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.596124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.596135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.596323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.596334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.596539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.596551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.596732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.596744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.597050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.597061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.597269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.597281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.597412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.597423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.597613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.597625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.597807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.597819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.598081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.598093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.598348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.598360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.598549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.598561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.598820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.598832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.599005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.599016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.599188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.599199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.599384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.599396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.599587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.599598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.599854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.599865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.600108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.600120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.600400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.600411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.600596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.600608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.600785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.600797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.601057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.601068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.601262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.601275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.601464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.601476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.601713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.601727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.601962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.601974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.602160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.602172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.602360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.602372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.602567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.602579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.602788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.602800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.603007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.603019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.603253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.603265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.603516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.603527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.603700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.603712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.603845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.603857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.604116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.604128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.604332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.604344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.604462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.604473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.604660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.604672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.604867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.604879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.605113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.605125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.605314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.605326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.605522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.605534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.605770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.605782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.605895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.605907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.606197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.606208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.606430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.606442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.606624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.606635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.606814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.606826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.607028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.607040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.607366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.607378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.607506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.607525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.607709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.607725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.607996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.608011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.608266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.608282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.608423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.272 [2024-07-15 20:52:41.608438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.272 qpair failed and we were unable to recover it.
00:27:07.272 [2024-07-15 20:52:41.608650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.273 [2024-07-15 20:52:41.608665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.273 qpair failed and we were unable to recover it.
00:27:07.273 [2024-07-15 20:52:41.608883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.273 [2024-07-15 20:52:41.608899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.273 qpair failed and we were unable to recover it.
00:27:07.273 [2024-07-15 20:52:41.609025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.273 [2024-07-15 20:52:41.609041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.273 qpair failed and we were unable to recover it.
00:27:07.273 [2024-07-15 20:52:41.609180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.273 [2024-07-15 20:52:41.609195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.273 qpair failed and we were unable to recover it.
00:27:07.273 [2024-07-15 20:52:41.609412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.273 [2024-07-15 20:52:41.609425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.273 qpair failed and we were unable to recover it.
00:27:07.273 [2024-07-15 20:52:41.609672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.273 [2024-07-15 20:52:41.609684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.273 qpair failed and we were unable to recover it.
00:27:07.273 [2024-07-15 20:52:41.609920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.273 [2024-07-15 20:52:41.609932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.273 qpair failed and we were unable to recover it.
00:27:07.273 [2024-07-15 20:52:41.610099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.273 [2024-07-15 20:52:41.610111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.273 qpair failed and we were unable to recover it.
00:27:07.273 [2024-07-15 20:52:41.610242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.610254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.610461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.610473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.610607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.610619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.610879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.610891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.611061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.611073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.611309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.611321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.611504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.611516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.611775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.611787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.611976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.611987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.612165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.612177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 
00:27:07.273 [2024-07-15 20:52:41.612382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.612394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.612611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.612622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.612859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.612871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.613109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.613121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.613303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.613316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.613486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.613498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.613761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.613772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.613952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.613964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.614200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.614211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.614405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.614417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 
00:27:07.273 [2024-07-15 20:52:41.614607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.614619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.614811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.614823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.615074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.615085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.615207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.615219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.615352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.615364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.615484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.615496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.615754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.615765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.615883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.615897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.616080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.616092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.616281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.616293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 
00:27:07.273 [2024-07-15 20:52:41.616477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.616489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.616669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.616681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.616932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.616943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.617150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.617162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.617399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.617411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.617597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.617609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.617864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.617875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.618054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.618066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.618239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.618252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.618434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.618447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 
00:27:07.273 [2024-07-15 20:52:41.618647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.618659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.618870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.618882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.619084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.619095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.619241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.619253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.619434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.619446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.619637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.619649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.619829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.619841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.620022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.620034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.620222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.620239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.273 [2024-07-15 20:52:41.620409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.620420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 
00:27:07.273 [2024-07-15 20:52:41.620625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.273 [2024-07-15 20:52:41.620636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.273 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.620871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.620883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.621097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.621108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.621317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.621330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.621510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.621522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.621702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.621714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.621949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.621961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.622161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.622173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.622409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.622421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.622681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.622693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 
00:27:07.274 [2024-07-15 20:52:41.622953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.622964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.623133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.623145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.623325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.623337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.623577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.623589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.623826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.623838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.624102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.624113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.624304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.624316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.624551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.624565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.624740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.624751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.624943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.624955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 
00:27:07.274 [2024-07-15 20:52:41.625220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.625241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.625475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.625486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.625695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.625706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.625964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.625976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.626160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.626171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.626375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.626387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.626637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.626649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.626819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.626831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.627003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.627015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.627201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.627212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 
00:27:07.274 [2024-07-15 20:52:41.627450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.627462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.627593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.627605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.627868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.627880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.628000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.628012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.628190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.628201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.628371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.628384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.628559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.628570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.628807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.628819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.629083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.629095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.629369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.629380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 
00:27:07.274 [2024-07-15 20:52:41.629617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.629628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.629818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.629829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.630088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.630099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.630285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.630298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.630533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.630544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.630718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.630731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.630910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.630922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.631221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.631237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.631412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.631424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.631623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.631635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 
00:27:07.274 [2024-07-15 20:52:41.631971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.631982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.632162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.632174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.274 qpair failed and we were unable to recover it. 00:27:07.274 [2024-07-15 20:52:41.632298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.274 [2024-07-15 20:52:41.632310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.632480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.632492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.632698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.632709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.632885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.632896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.633186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.633198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.633383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.633398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.633583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.633595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.633709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.633722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 
00:27:07.275 [2024-07-15 20:52:41.633899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.633911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.634195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.634207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.634397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.634409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.634680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.634692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.634870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.634882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.635087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.635100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.635236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.635249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.635508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.635520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.635756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.635767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.636047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.636058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 
00:27:07.275 [2024-07-15 20:52:41.636334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.636346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.636546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.636558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.636732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.636743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.637060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.637072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.637256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.637268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.637504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.637517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.637775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.637788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.638045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.638057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.638242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.638254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.638441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.638453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 
00:27:07.275 [2024-07-15 20:52:41.638710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.638721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.638841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.638853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.639041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.639053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.639289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.639301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.639565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.639576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.639758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.639769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.639906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.639917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.640182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.640193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.640433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.640444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.640612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.640624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 
00:27:07.275 [2024-07-15 20:52:41.640874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.640886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.641149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.641160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.641335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.641346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.641459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.641472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.641669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.641681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.641886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.641897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.642131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.642143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.642399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.642412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.642595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.642606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.642784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.642797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 
00:27:07.275 [2024-07-15 20:52:41.642975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.642987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.643244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.643257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.643445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.643457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.643625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.643637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.643890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.643902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.644140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.644152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.644416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.644429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.644638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.644650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.644766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.644778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 00:27:07.275 [2024-07-15 20:52:41.645037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.275 [2024-07-15 20:52:41.645049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.275 qpair failed and we were unable to recover it. 
00:27:07.275 [... the preceding three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats identically for every further retry, target timestamps 2024-07-15 20:52:41.645218 through 20:52:41.687270 ...]
00:27:07.279 [2024-07-15 20:52:41.687551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.279 [2024-07-15 20:52:41.687563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.279 qpair failed and we were unable to recover it. 00:27:07.279 [2024-07-15 20:52:41.687804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.279 [2024-07-15 20:52:41.687816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.279 qpair failed and we were unable to recover it. 00:27:07.279 [2024-07-15 20:52:41.688036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.279 [2024-07-15 20:52:41.688047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.279 qpair failed and we were unable to recover it. 00:27:07.279 [2024-07-15 20:52:41.688165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.279 [2024-07-15 20:52:41.688176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.279 qpair failed and we were unable to recover it. 00:27:07.279 [2024-07-15 20:52:41.688378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.279 [2024-07-15 20:52:41.688390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.279 qpair failed and we were unable to recover it. 00:27:07.279 [2024-07-15 20:52:41.688602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.279 [2024-07-15 20:52:41.688614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.279 qpair failed and we were unable to recover it. 00:27:07.279 [2024-07-15 20:52:41.688733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.279 [2024-07-15 20:52:41.688745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.279 qpair failed and we were unable to recover it. 00:27:07.279 [2024-07-15 20:52:41.688986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.279 [2024-07-15 20:52:41.688998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.279 qpair failed and we were unable to recover it. 00:27:07.279 [2024-07-15 20:52:41.689177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.279 [2024-07-15 20:52:41.689189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.279 qpair failed and we were unable to recover it. 00:27:07.279 [2024-07-15 20:52:41.689318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.279 [2024-07-15 20:52:41.689330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.279 qpair failed and we were unable to recover it. 
00:27:07.279 [2024-07-15 20:52:41.689585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.279 [2024-07-15 20:52:41.689597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.279 qpair failed and we were unable to recover it. 00:27:07.279 [2024-07-15 20:52:41.689796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.279 [2024-07-15 20:52:41.689808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.279 qpair failed and we were unable to recover it. 00:27:07.279 [2024-07-15 20:52:41.689994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.279 [2024-07-15 20:52:41.690006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.279 qpair failed and we were unable to recover it. 00:27:07.279 [2024-07-15 20:52:41.690193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.279 [2024-07-15 20:52:41.690205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.279 qpair failed and we were unable to recover it. 00:27:07.279 [2024-07-15 20:52:41.690391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.279 [2024-07-15 20:52:41.690403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.279 qpair failed and we were unable to recover it. 00:27:07.279 [2024-07-15 20:52:41.690526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.279 [2024-07-15 20:52:41.690538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.279 qpair failed and we were unable to recover it. 00:27:07.279 [2024-07-15 20:52:41.690717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.279 [2024-07-15 20:52:41.690729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.279 qpair failed and we were unable to recover it. 00:27:07.279 [2024-07-15 20:52:41.690900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.279 [2024-07-15 20:52:41.690912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.279 qpair failed and we were unable to recover it. 00:27:07.279 [2024-07-15 20:52:41.691175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.279 [2024-07-15 20:52:41.691186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.279 qpair failed and we were unable to recover it. 00:27:07.279 [2024-07-15 20:52:41.691443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.279 [2024-07-15 20:52:41.691454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.279 qpair failed and we were unable to recover it. 
00:27:07.279 [2024-07-15 20:52:41.691745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.279 [2024-07-15 20:52:41.691757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.279 qpair failed and we were unable to recover it. 00:27:07.279 [2024-07-15 20:52:41.691973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.279 [2024-07-15 20:52:41.691984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.279 qpair failed and we were unable to recover it. 00:27:07.279 [2024-07-15 20:52:41.692164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.279 [2024-07-15 20:52:41.692175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.279 qpair failed and we were unable to recover it. 00:27:07.279 [2024-07-15 20:52:41.692437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.279 [2024-07-15 20:52:41.692449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.279 qpair failed and we were unable to recover it. 00:27:07.279 [2024-07-15 20:52:41.692633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.279 [2024-07-15 20:52:41.692652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.279 qpair failed and we were unable to recover it. 00:27:07.279 [2024-07-15 20:52:41.692945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.279 [2024-07-15 20:52:41.692961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.279 qpair failed and we were unable to recover it. 00:27:07.279 [2024-07-15 20:52:41.693235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.279 [2024-07-15 20:52:41.693251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.279 qpair failed and we were unable to recover it. 00:27:07.279 [2024-07-15 20:52:41.693495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.279 [2024-07-15 20:52:41.693510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.279 qpair failed and we were unable to recover it. 00:27:07.279 [2024-07-15 20:52:41.693711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.279 [2024-07-15 20:52:41.693726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.279 qpair failed and we were unable to recover it. 00:27:07.279 [2024-07-15 20:52:41.693842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.279 [2024-07-15 20:52:41.693857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.279 qpair failed and we were unable to recover it. 
00:27:07.279 [2024-07-15 20:52:41.694072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.279 [2024-07-15 20:52:41.694087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.279 qpair failed and we were unable to recover it. 00:27:07.279 [2024-07-15 20:52:41.694325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.279 [2024-07-15 20:52:41.694340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.279 qpair failed and we were unable to recover it. 00:27:07.279 [2024-07-15 20:52:41.694483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.279 [2024-07-15 20:52:41.694498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.279 qpair failed and we were unable to recover it. 00:27:07.279 [2024-07-15 20:52:41.694694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.279 [2024-07-15 20:52:41.694709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.279 qpair failed and we were unable to recover it. 00:27:07.279 [2024-07-15 20:52:41.694956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.279 [2024-07-15 20:52:41.694971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.279 qpair failed and we were unable to recover it. 00:27:07.279 [2024-07-15 20:52:41.695221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.279 [2024-07-15 20:52:41.695250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.279 qpair failed and we were unable to recover it. 00:27:07.279 [2024-07-15 20:52:41.695500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.279 [2024-07-15 20:52:41.695515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.279 qpair failed and we were unable to recover it. 00:27:07.279 [2024-07-15 20:52:41.695762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.279 [2024-07-15 20:52:41.695778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.279 qpair failed and we were unable to recover it. 00:27:07.279 [2024-07-15 20:52:41.695919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.279 [2024-07-15 20:52:41.695934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.279 qpair failed and we were unable to recover it. 00:27:07.279 [2024-07-15 20:52:41.696112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.696127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 
00:27:07.280 [2024-07-15 20:52:41.696332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.696347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.696593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.696608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.696911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.696926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.697060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.697074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.697189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.697202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.697390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.697406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.697592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.697607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.697811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.697826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.698047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.698062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.698330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.698345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 
00:27:07.280 [2024-07-15 20:52:41.698595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.698610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.698904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.698922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.699102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.699117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.699234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.699248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.699495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.699510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.699706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.699721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.699935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.699950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.700243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.700258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.700523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.700538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.700673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.700688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 
00:27:07.280 [2024-07-15 20:52:41.700819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.700834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.701013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.701028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.701203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.701219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.701468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.701483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.701669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.701684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.701906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.701922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.702237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.702250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.702448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.702460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.702639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.702650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.702856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.702867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 
00:27:07.280 [2024-07-15 20:52:41.702964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.702974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.703209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.703221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.703367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.703379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.703565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.703577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.703811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.703823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.704081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.704093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.704266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.704278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.704514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.704527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.704693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.704706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.704815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.704825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 
00:27:07.280 [2024-07-15 20:52:41.705083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.705096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.705279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.705292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.705464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.705476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.705664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.705676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.705955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.705967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.706097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.706109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.706356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.706368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.706579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.706591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.706790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.706801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.707035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.707047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 
00:27:07.280 [2024-07-15 20:52:41.707282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.707294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.280 qpair failed and we were unable to recover it. 00:27:07.280 [2024-07-15 20:52:41.707486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.280 [2024-07-15 20:52:41.707499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.281 [2024-07-15 20:52:41.707734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.707745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.281 [2024-07-15 20:52:41.707945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.707957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.281 [2024-07-15 20:52:41.708144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.708156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.281 [2024-07-15 20:52:41.708266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.708277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.281 [2024-07-15 20:52:41.708464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.708475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.281 [2024-07-15 20:52:41.708597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.708609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.281 [2024-07-15 20:52:41.708868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.708880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.281 [2024-07-15 20:52:41.709111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.709122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 
00:27:07.281 [2024-07-15 20:52:41.709305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.709318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.281 [2024-07-15 20:52:41.709503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.709515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.281 [2024-07-15 20:52:41.709681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.709693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.281 [2024-07-15 20:52:41.709792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.709803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.281 [2024-07-15 20:52:41.710011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.710022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.281 [2024-07-15 20:52:41.710221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.710236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.281 [2024-07-15 20:52:41.710425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.710438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.281 [2024-07-15 20:52:41.710564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.710576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.281 [2024-07-15 20:52:41.710822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.710833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.281 [2024-07-15 20:52:41.711001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.711013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 
00:27:07.281 [2024-07-15 20:52:41.711273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.711285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.281 [2024-07-15 20:52:41.711452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.711463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.281 [2024-07-15 20:52:41.711655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.711669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.281 [2024-07-15 20:52:41.711923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.711935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.281 [2024-07-15 20:52:41.712124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.712136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.281 [2024-07-15 20:52:41.712315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.712327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.281 [2024-07-15 20:52:41.712528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.712541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.281 [2024-07-15 20:52:41.712810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.712821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.281 [2024-07-15 20:52:41.713071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.713085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.281 [2024-07-15 20:52:41.713309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.713321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 
00:27:07.281 [2024-07-15 20:52:41.713502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.713514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.281 [2024-07-15 20:52:41.713696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.713708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.281 [2024-07-15 20:52:41.713895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.713907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.281 [2024-07-15 20:52:41.714043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.714055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.281 [2024-07-15 20:52:41.714245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.714258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.281 [2024-07-15 20:52:41.714467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.714478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.281 [2024-07-15 20:52:41.714684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.714696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.281 [2024-07-15 20:52:41.714807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.714822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.281 [2024-07-15 20:52:41.715035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.715047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.281 [2024-07-15 20:52:41.715264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.715276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 
00:27:07.281 [2024-07-15 20:52:41.715513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.715528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.281 [2024-07-15 20:52:41.715814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.715827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.281 [2024-07-15 20:52:41.715955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.715967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.281 [2024-07-15 20:52:41.716154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.716165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.281 [2024-07-15 20:52:41.716341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.281 [2024-07-15 20:52:41.716353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.281 qpair failed and we were unable to recover it. 00:27:07.561 [2024-07-15 20:52:41.716589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.561 [2024-07-15 20:52:41.716601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.561 qpair failed and we were unable to recover it. 00:27:07.561 [2024-07-15 20:52:41.716785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.561 [2024-07-15 20:52:41.716798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.561 qpair failed and we were unable to recover it. 00:27:07.561 [2024-07-15 20:52:41.716987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.561 [2024-07-15 20:52:41.716999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.561 qpair failed and we were unable to recover it. 00:27:07.561 [2024-07-15 20:52:41.717257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.561 [2024-07-15 20:52:41.717270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.561 qpair failed and we were unable to recover it. 00:27:07.561 [2024-07-15 20:52:41.717542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.561 [2024-07-15 20:52:41.717554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.561 qpair failed and we were unable to recover it. 
00:27:07.561 [2024-07-15 20:52:41.717764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.561 [2024-07-15 20:52:41.717775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.561 qpair failed and we were unable to recover it. 00:27:07.561 [2024-07-15 20:52:41.717990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.561 [2024-07-15 20:52:41.718001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.561 qpair failed and we were unable to recover it. 00:27:07.562 [2024-07-15 20:52:41.718113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.562 [2024-07-15 20:52:41.718125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.562 qpair failed and we were unable to recover it. 00:27:07.562 [2024-07-15 20:52:41.718322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.562 [2024-07-15 20:52:41.718335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.562 qpair failed and we were unable to recover it. 00:27:07.562 [2024-07-15 20:52:41.718576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.562 [2024-07-15 20:52:41.718589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.562 qpair failed and we were unable to recover it. 00:27:07.562 [2024-07-15 20:52:41.718795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.562 [2024-07-15 20:52:41.718806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.562 qpair failed and we were unable to recover it. 00:27:07.562 [2024-07-15 20:52:41.719068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.562 [2024-07-15 20:52:41.719081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.562 qpair failed and we were unable to recover it. 00:27:07.562 [2024-07-15 20:52:41.719378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.562 [2024-07-15 20:52:41.719390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.562 qpair failed and we were unable to recover it. 00:27:07.562 [2024-07-15 20:52:41.719561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.562 [2024-07-15 20:52:41.719574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.562 qpair failed and we were unable to recover it. 00:27:07.562 [2024-07-15 20:52:41.719819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.562 [2024-07-15 20:52:41.719831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.562 qpair failed and we were unable to recover it. 
00:27:07.567 [2024-07-15 20:52:41.763092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.567 [2024-07-15 20:52:41.763105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.567 qpair failed and we were unable to recover it. 00:27:07.567 [2024-07-15 20:52:41.763296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.567 [2024-07-15 20:52:41.763307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.567 qpair failed and we were unable to recover it. 00:27:07.567 [2024-07-15 20:52:41.763497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.567 [2024-07-15 20:52:41.763508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.567 qpair failed and we were unable to recover it. 00:27:07.567 [2024-07-15 20:52:41.763705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.567 [2024-07-15 20:52:41.763717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.567 qpair failed and we were unable to recover it. 00:27:07.567 [2024-07-15 20:52:41.763903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.567 [2024-07-15 20:52:41.763914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.567 qpair failed and we were unable to recover it. 00:27:07.567 [2024-07-15 20:52:41.764162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.567 [2024-07-15 20:52:41.764174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.567 qpair failed and we were unable to recover it. 00:27:07.567 [2024-07-15 20:52:41.764358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.567 [2024-07-15 20:52:41.764371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.567 qpair failed and we were unable to recover it. 00:27:07.567 [2024-07-15 20:52:41.764546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.567 [2024-07-15 20:52:41.764557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.567 qpair failed and we were unable to recover it. 00:27:07.567 [2024-07-15 20:52:41.764794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.567 [2024-07-15 20:52:41.764806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.567 qpair failed and we were unable to recover it. 00:27:07.567 [2024-07-15 20:52:41.764993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.567 [2024-07-15 20:52:41.765005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.567 qpair failed and we were unable to recover it. 
00:27:07.567 [2024-07-15 20:52:41.765174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.567 [2024-07-15 20:52:41.765186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.567 qpair failed and we were unable to recover it. 00:27:07.567 [2024-07-15 20:52:41.765361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.567 [2024-07-15 20:52:41.765374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.567 qpair failed and we were unable to recover it. 00:27:07.567 [2024-07-15 20:52:41.765639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.567 [2024-07-15 20:52:41.765650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.567 qpair failed and we were unable to recover it. 00:27:07.567 [2024-07-15 20:52:41.765834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.567 [2024-07-15 20:52:41.765846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.567 qpair failed and we were unable to recover it. 00:27:07.567 [2024-07-15 20:52:41.766057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.567 [2024-07-15 20:52:41.766069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.567 qpair failed and we were unable to recover it. 00:27:07.567 [2024-07-15 20:52:41.766256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.567 [2024-07-15 20:52:41.766268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.567 qpair failed and we were unable to recover it. 00:27:07.567 [2024-07-15 20:52:41.766462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.567 [2024-07-15 20:52:41.766474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.567 qpair failed and we were unable to recover it. 00:27:07.567 [2024-07-15 20:52:41.766607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.567 [2024-07-15 20:52:41.766618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.567 qpair failed and we were unable to recover it. 00:27:07.567 [2024-07-15 20:52:41.766828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.567 [2024-07-15 20:52:41.766841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.567 qpair failed and we were unable to recover it. 00:27:07.567 [2024-07-15 20:52:41.767079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.567 [2024-07-15 20:52:41.767090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.567 qpair failed and we were unable to recover it. 
00:27:07.567 [2024-07-15 20:52:41.767251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.567 [2024-07-15 20:52:41.767262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.568 qpair failed and we were unable to recover it. 00:27:07.568 [2024-07-15 20:52:41.767471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.568 [2024-07-15 20:52:41.767483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.568 qpair failed and we were unable to recover it. 00:27:07.568 [2024-07-15 20:52:41.767715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.568 [2024-07-15 20:52:41.767727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.568 qpair failed and we were unable to recover it. 00:27:07.568 [2024-07-15 20:52:41.767896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.568 [2024-07-15 20:52:41.767908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.568 qpair failed and we were unable to recover it. 00:27:07.568 [2024-07-15 20:52:41.768106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.568 [2024-07-15 20:52:41.768118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.568 qpair failed and we were unable to recover it. 00:27:07.568 [2024-07-15 20:52:41.768329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.568 [2024-07-15 20:52:41.768341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.568 qpair failed and we were unable to recover it. 00:27:07.568 [2024-07-15 20:52:41.768578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.568 [2024-07-15 20:52:41.768589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.568 qpair failed and we were unable to recover it. 00:27:07.568 [2024-07-15 20:52:41.768842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.568 [2024-07-15 20:52:41.768854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.568 qpair failed and we were unable to recover it. 00:27:07.568 [2024-07-15 20:52:41.768972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.568 [2024-07-15 20:52:41.768984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.568 qpair failed and we were unable to recover it. 00:27:07.568 [2024-07-15 20:52:41.769250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.568 [2024-07-15 20:52:41.769262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.568 qpair failed and we were unable to recover it. 
00:27:07.568 [2024-07-15 20:52:41.769512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.568 [2024-07-15 20:52:41.769524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.568 qpair failed and we were unable to recover it. 00:27:07.568 [2024-07-15 20:52:41.769764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.568 [2024-07-15 20:52:41.769775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.568 qpair failed and we were unable to recover it. 00:27:07.568 [2024-07-15 20:52:41.770038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.568 [2024-07-15 20:52:41.770050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.568 qpair failed and we were unable to recover it. 00:27:07.568 [2024-07-15 20:52:41.770297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.568 [2024-07-15 20:52:41.770309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.568 qpair failed and we were unable to recover it. 00:27:07.568 [2024-07-15 20:52:41.770424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.568 [2024-07-15 20:52:41.770436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.568 qpair failed and we were unable to recover it. 00:27:07.568 [2024-07-15 20:52:41.770613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.568 [2024-07-15 20:52:41.770624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.568 qpair failed and we were unable to recover it. 00:27:07.568 [2024-07-15 20:52:41.770754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.568 [2024-07-15 20:52:41.770766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.568 qpair failed and we were unable to recover it. 00:27:07.568 [2024-07-15 20:52:41.770880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.568 [2024-07-15 20:52:41.770892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.568 qpair failed and we were unable to recover it. 00:27:07.568 [2024-07-15 20:52:41.771097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.568 [2024-07-15 20:52:41.771108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.568 qpair failed and we were unable to recover it. 00:27:07.568 [2024-07-15 20:52:41.771287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.568 [2024-07-15 20:52:41.771300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.568 qpair failed and we were unable to recover it. 
00:27:07.568 [2024-07-15 20:52:41.771544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.568 [2024-07-15 20:52:41.771556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.568 qpair failed and we were unable to recover it. 00:27:07.568 [2024-07-15 20:52:41.771727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.568 [2024-07-15 20:52:41.771739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.568 qpair failed and we were unable to recover it. 00:27:07.568 [2024-07-15 20:52:41.771853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.568 [2024-07-15 20:52:41.771865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.568 qpair failed and we were unable to recover it. 00:27:07.568 [2024-07-15 20:52:41.772131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.568 [2024-07-15 20:52:41.772143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.568 qpair failed and we were unable to recover it. 00:27:07.568 [2024-07-15 20:52:41.772389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.568 [2024-07-15 20:52:41.772400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.568 qpair failed and we were unable to recover it. 00:27:07.568 [2024-07-15 20:52:41.772643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.568 [2024-07-15 20:52:41.772655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.568 qpair failed and we were unable to recover it. 00:27:07.568 [2024-07-15 20:52:41.772822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.568 [2024-07-15 20:52:41.772834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.568 qpair failed and we were unable to recover it. 00:27:07.568 [2024-07-15 20:52:41.773023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.568 [2024-07-15 20:52:41.773035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.568 qpair failed and we were unable to recover it. 00:27:07.568 [2024-07-15 20:52:41.773293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.568 [2024-07-15 20:52:41.773306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.568 qpair failed and we were unable to recover it. 00:27:07.568 [2024-07-15 20:52:41.773600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.568 [2024-07-15 20:52:41.773612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.568 qpair failed and we were unable to recover it. 
00:27:07.568 [2024-07-15 20:52:41.773865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.568 [2024-07-15 20:52:41.773877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.568 qpair failed and we were unable to recover it. 00:27:07.568 [2024-07-15 20:52:41.774118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.568 [2024-07-15 20:52:41.774130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.568 qpair failed and we were unable to recover it. 00:27:07.568 [2024-07-15 20:52:41.774259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.568 [2024-07-15 20:52:41.774272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.568 qpair failed and we were unable to recover it. 00:27:07.568 [2024-07-15 20:52:41.774532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.568 [2024-07-15 20:52:41.774543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.568 qpair failed and we were unable to recover it. 00:27:07.568 [2024-07-15 20:52:41.774666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.568 [2024-07-15 20:52:41.774678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.568 qpair failed and we were unable to recover it. 00:27:07.568 [2024-07-15 20:52:41.774800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.568 [2024-07-15 20:52:41.774812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.568 qpair failed and we were unable to recover it. 00:27:07.568 [2024-07-15 20:52:41.775070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.568 [2024-07-15 20:52:41.775082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.568 qpair failed and we were unable to recover it. 00:27:07.568 [2024-07-15 20:52:41.775267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.568 [2024-07-15 20:52:41.775278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.568 qpair failed and we were unable to recover it. 00:27:07.568 [2024-07-15 20:52:41.775513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.568 [2024-07-15 20:52:41.775526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.568 qpair failed and we were unable to recover it. 00:27:07.569 [2024-07-15 20:52:41.775656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.569 [2024-07-15 20:52:41.775667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.569 qpair failed and we were unable to recover it. 
00:27:07.569 [2024-07-15 20:52:41.775901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.569 [2024-07-15 20:52:41.775913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.569 qpair failed and we were unable to recover it. 00:27:07.569 [2024-07-15 20:52:41.776081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.569 [2024-07-15 20:52:41.776094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.569 qpair failed and we were unable to recover it. 00:27:07.569 [2024-07-15 20:52:41.776314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.569 [2024-07-15 20:52:41.776326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.569 qpair failed and we were unable to recover it. 00:27:07.569 [2024-07-15 20:52:41.776574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.569 [2024-07-15 20:52:41.776586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.569 qpair failed and we were unable to recover it. 00:27:07.569 [2024-07-15 20:52:41.776862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.569 [2024-07-15 20:52:41.776874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.569 qpair failed and we were unable to recover it. 00:27:07.569 [2024-07-15 20:52:41.777094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.569 [2024-07-15 20:52:41.777105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.569 qpair failed and we were unable to recover it. 00:27:07.569 [2024-07-15 20:52:41.777293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.569 [2024-07-15 20:52:41.777306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.569 qpair failed and we were unable to recover it. 00:27:07.569 [2024-07-15 20:52:41.777484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.569 [2024-07-15 20:52:41.777496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.569 qpair failed and we were unable to recover it. 00:27:07.569 [2024-07-15 20:52:41.777784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.569 [2024-07-15 20:52:41.777796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.569 qpair failed and we were unable to recover it. 00:27:07.569 [2024-07-15 20:52:41.777969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.569 [2024-07-15 20:52:41.777980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.569 qpair failed and we were unable to recover it. 
00:27:07.569 [2024-07-15 20:52:41.778268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.569 [2024-07-15 20:52:41.778280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.569 qpair failed and we were unable to recover it. 00:27:07.569 [2024-07-15 20:52:41.778556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.569 [2024-07-15 20:52:41.778568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.569 qpair failed and we were unable to recover it. 00:27:07.569 [2024-07-15 20:52:41.778817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.569 [2024-07-15 20:52:41.778829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.569 qpair failed and we were unable to recover it. 00:27:07.569 [2024-07-15 20:52:41.778961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.569 [2024-07-15 20:52:41.778973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.569 qpair failed and we were unable to recover it. 00:27:07.569 [2024-07-15 20:52:41.779220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.569 [2024-07-15 20:52:41.779235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.569 qpair failed and we were unable to recover it. 00:27:07.569 [2024-07-15 20:52:41.779346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.569 [2024-07-15 20:52:41.779358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.569 qpair failed and we were unable to recover it. 00:27:07.569 [2024-07-15 20:52:41.779610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.569 [2024-07-15 20:52:41.779621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.569 qpair failed and we were unable to recover it. 00:27:07.569 [2024-07-15 20:52:41.779880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.569 [2024-07-15 20:52:41.779891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.569 qpair failed and we were unable to recover it. 00:27:07.569 [2024-07-15 20:52:41.780132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.569 [2024-07-15 20:52:41.780144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.569 qpair failed and we were unable to recover it. 00:27:07.569 [2024-07-15 20:52:41.780333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.569 [2024-07-15 20:52:41.780345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.569 qpair failed and we were unable to recover it. 
00:27:07.569 [2024-07-15 20:52:41.780527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.569 [2024-07-15 20:52:41.780538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.569 qpair failed and we were unable to recover it. 00:27:07.569 [2024-07-15 20:52:41.780775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.569 [2024-07-15 20:52:41.780787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.569 qpair failed and we were unable to recover it. 00:27:07.569 [2024-07-15 20:52:41.781020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.569 [2024-07-15 20:52:41.781032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.569 qpair failed and we were unable to recover it. 00:27:07.569 [2024-07-15 20:52:41.781227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.569 [2024-07-15 20:52:41.781239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.569 qpair failed and we were unable to recover it. 00:27:07.569 [2024-07-15 20:52:41.781409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.569 [2024-07-15 20:52:41.781421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.569 qpair failed and we were unable to recover it. 00:27:07.569 [2024-07-15 20:52:41.781686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.569 [2024-07-15 20:52:41.781698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.569 qpair failed and we were unable to recover it. 00:27:07.569 [2024-07-15 20:52:41.781867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.569 [2024-07-15 20:52:41.781880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.569 qpair failed and we were unable to recover it. 00:27:07.569 [2024-07-15 20:52:41.782048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.569 [2024-07-15 20:52:41.782060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.569 qpair failed and we were unable to recover it. 00:27:07.569 [2024-07-15 20:52:41.782233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.569 [2024-07-15 20:52:41.782245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.569 qpair failed and we were unable to recover it. 00:27:07.569 [2024-07-15 20:52:41.782504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.569 [2024-07-15 20:52:41.782516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.569 qpair failed and we were unable to recover it. 
00:27:07.569 [2024-07-15 20:52:41.782765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.569 [2024-07-15 20:52:41.782777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.569 qpair failed and we were unable to recover it. 00:27:07.569 [2024-07-15 20:52:41.783026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.569 [2024-07-15 20:52:41.783038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.569 qpair failed and we were unable to recover it. 00:27:07.569 [2024-07-15 20:52:41.783169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.569 [2024-07-15 20:52:41.783182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.569 qpair failed and we were unable to recover it. 00:27:07.569 [2024-07-15 20:52:41.783425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.569 [2024-07-15 20:52:41.783437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.569 qpair failed and we were unable to recover it. 00:27:07.569 [2024-07-15 20:52:41.783695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.570 [2024-07-15 20:52:41.783706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.570 qpair failed and we were unable to recover it. 00:27:07.570 [2024-07-15 20:52:41.783913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.570 [2024-07-15 20:52:41.783925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.570 qpair failed and we were unable to recover it. 00:27:07.570 [2024-07-15 20:52:41.784200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.570 [2024-07-15 20:52:41.784211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.570 qpair failed and we were unable to recover it. 00:27:07.570 [2024-07-15 20:52:41.784476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.570 [2024-07-15 20:52:41.784488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.570 qpair failed and we were unable to recover it. 00:27:07.570 [2024-07-15 20:52:41.784668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.570 [2024-07-15 20:52:41.784681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.570 qpair failed and we were unable to recover it. 00:27:07.570 [2024-07-15 20:52:41.784916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.570 [2024-07-15 20:52:41.784928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.570 qpair failed and we were unable to recover it. 
00:27:07.570 [2024-07-15 20:52:41.785107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.570 [2024-07-15 20:52:41.785119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.570 qpair failed and we were unable to recover it. 00:27:07.570 [2024-07-15 20:52:41.785376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.570 [2024-07-15 20:52:41.785388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.570 qpair failed and we were unable to recover it. 00:27:07.570 [2024-07-15 20:52:41.785647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.570 [2024-07-15 20:52:41.785659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.570 qpair failed and we were unable to recover it. 00:27:07.570 [2024-07-15 20:52:41.785842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.570 [2024-07-15 20:52:41.785853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.570 qpair failed and we were unable to recover it. 00:27:07.570 [2024-07-15 20:52:41.786088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.570 [2024-07-15 20:52:41.786100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.570 qpair failed and we were unable to recover it. 00:27:07.570 [2024-07-15 20:52:41.786367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.570 [2024-07-15 20:52:41.786379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.570 qpair failed and we were unable to recover it. 00:27:07.570 [2024-07-15 20:52:41.786630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.570 [2024-07-15 20:52:41.786642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.570 qpair failed and we were unable to recover it. 00:27:07.570 [2024-07-15 20:52:41.786880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.570 [2024-07-15 20:52:41.786892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.570 qpair failed and we were unable to recover it. 00:27:07.570 [2024-07-15 20:52:41.787157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.570 [2024-07-15 20:52:41.787168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.570 qpair failed and we were unable to recover it. 00:27:07.570 [2024-07-15 20:52:41.787422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.570 [2024-07-15 20:52:41.787434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.570 qpair failed and we were unable to recover it. 
00:27:07.570 [2024-07-15 20:52:41.787548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.570 [2024-07-15 20:52:41.787560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.570 qpair failed and we were unable to recover it. 00:27:07.570 [2024-07-15 20:52:41.787793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.570 [2024-07-15 20:52:41.787806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.570 qpair failed and we were unable to recover it. 00:27:07.570 [2024-07-15 20:52:41.787975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.570 [2024-07-15 20:52:41.787987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.570 qpair failed and we were unable to recover it. 00:27:07.570 [2024-07-15 20:52:41.788123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.570 [2024-07-15 20:52:41.788135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.570 qpair failed and we were unable to recover it. 00:27:07.570 [2024-07-15 20:52:41.788371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.570 [2024-07-15 20:52:41.788383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.570 qpair failed and we were unable to recover it. 00:27:07.570 [2024-07-15 20:52:41.788643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.570 [2024-07-15 20:52:41.788655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.570 qpair failed and we were unable to recover it. 00:27:07.570 [2024-07-15 20:52:41.788913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.570 [2024-07-15 20:52:41.788925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.570 qpair failed and we were unable to recover it. 00:27:07.570 [2024-07-15 20:52:41.789158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.570 [2024-07-15 20:52:41.789169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.570 qpair failed and we were unable to recover it. 00:27:07.570 [2024-07-15 20:52:41.789428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.570 [2024-07-15 20:52:41.789440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.570 qpair failed and we were unable to recover it. 00:27:07.570 [2024-07-15 20:52:41.789700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.570 [2024-07-15 20:52:41.789712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.570 qpair failed and we were unable to recover it. 
00:27:07.570 [2024-07-15 20:52:41.789963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.570 [2024-07-15 20:52:41.789975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.570 qpair failed and we were unable to recover it. 00:27:07.570 [2024-07-15 20:52:41.790211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.570 [2024-07-15 20:52:41.790223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.570 qpair failed and we were unable to recover it. 00:27:07.570 [2024-07-15 20:52:41.790488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.570 [2024-07-15 20:52:41.790499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.570 qpair failed and we were unable to recover it. 00:27:07.570 [2024-07-15 20:52:41.790634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.570 [2024-07-15 20:52:41.790646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.570 qpair failed and we were unable to recover it. 00:27:07.570 [2024-07-15 20:52:41.790901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.570 [2024-07-15 20:52:41.790913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.570 qpair failed and we were unable to recover it. 00:27:07.570 [2024-07-15 20:52:41.791153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.570 [2024-07-15 20:52:41.791165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.570 qpair failed and we were unable to recover it. 00:27:07.570 [2024-07-15 20:52:41.791419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.570 [2024-07-15 20:52:41.791431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.570 qpair failed and we were unable to recover it. 00:27:07.570 [2024-07-15 20:52:41.791690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.570 [2024-07-15 20:52:41.791702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.570 qpair failed and we were unable to recover it. 00:27:07.570 [2024-07-15 20:52:41.791946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.570 [2024-07-15 20:52:41.791957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.570 qpair failed and we were unable to recover it. 00:27:07.570 [2024-07-15 20:52:41.792214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.570 [2024-07-15 20:52:41.792229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.570 qpair failed and we were unable to recover it. 
00:27:07.570 [2024-07-15 20:52:41.792470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.570 [2024-07-15 20:52:41.792483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.570 qpair failed and we were unable to recover it. 00:27:07.570 [2024-07-15 20:52:41.792734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.571 [2024-07-15 20:52:41.792746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.571 qpair failed and we were unable to recover it. 00:27:07.571 [2024-07-15 20:52:41.793002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.571 [2024-07-15 20:52:41.793014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.571 qpair failed and we were unable to recover it. 00:27:07.571 [2024-07-15 20:52:41.793256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.571 [2024-07-15 20:52:41.793268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.571 qpair failed and we were unable to recover it. 00:27:07.571 [2024-07-15 20:52:41.793452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.571 [2024-07-15 20:52:41.793465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.571 qpair failed and we were unable to recover it. 00:27:07.571 [2024-07-15 20:52:41.793718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.571 [2024-07-15 20:52:41.793729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.571 qpair failed and we were unable to recover it. 00:27:07.571 [2024-07-15 20:52:41.793995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.571 [2024-07-15 20:52:41.794007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.571 qpair failed and we were unable to recover it. 00:27:07.571 [2024-07-15 20:52:41.794235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.571 [2024-07-15 20:52:41.794248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.571 qpair failed and we were unable to recover it. 00:27:07.571 [2024-07-15 20:52:41.794503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.571 [2024-07-15 20:52:41.794517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.571 qpair failed and we were unable to recover it. 00:27:07.571 [2024-07-15 20:52:41.794782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.571 [2024-07-15 20:52:41.794794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.571 qpair failed and we were unable to recover it. 
00:27:07.571 [2024-07-15 20:52:41.795050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.571 [2024-07-15 20:52:41.795062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.571 qpair failed and we were unable to recover it.
00:27:07.571 [... the identical triplet — posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats continuously with timestamps 2024-07-15 20:52:41.795233 through 20:52:41.842645 ...]
00:27:07.577 [2024-07-15 20:52:41.842776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.577 [2024-07-15 20:52:41.842788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.577 qpair failed and we were unable to recover it.
00:27:07.577 [2024-07-15 20:52:41.842979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.577 [2024-07-15 20:52:41.842991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.577 qpair failed and we were unable to recover it. 00:27:07.577 [2024-07-15 20:52:41.843192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.577 [2024-07-15 20:52:41.843204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.577 qpair failed and we were unable to recover it. 00:27:07.577 [2024-07-15 20:52:41.843325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.577 [2024-07-15 20:52:41.843337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.577 qpair failed and we were unable to recover it. 00:27:07.577 [2024-07-15 20:52:41.843530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.577 [2024-07-15 20:52:41.843542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.577 qpair failed and we were unable to recover it. 00:27:07.577 [2024-07-15 20:52:41.843721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.577 [2024-07-15 20:52:41.843734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.577 qpair failed and we were unable to recover it. 00:27:07.577 [2024-07-15 20:52:41.844029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.577 [2024-07-15 20:52:41.844040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.577 qpair failed and we were unable to recover it. 00:27:07.577 [2024-07-15 20:52:41.844232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.577 [2024-07-15 20:52:41.844244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.577 qpair failed and we were unable to recover it. 00:27:07.577 [2024-07-15 20:52:41.844421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.577 [2024-07-15 20:52:41.844432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.577 qpair failed and we were unable to recover it. 00:27:07.577 [2024-07-15 20:52:41.844564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.577 [2024-07-15 20:52:41.844576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.577 qpair failed and we were unable to recover it. 00:27:07.577 [2024-07-15 20:52:41.844833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.577 [2024-07-15 20:52:41.844844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.577 qpair failed and we were unable to recover it. 
00:27:07.577 [2024-07-15 20:52:41.845082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.577 [2024-07-15 20:52:41.845093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.577 qpair failed and we were unable to recover it. 00:27:07.577 [2024-07-15 20:52:41.845354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.577 [2024-07-15 20:52:41.845366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.577 qpair failed and we were unable to recover it. 00:27:07.577 [2024-07-15 20:52:41.845551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.577 [2024-07-15 20:52:41.845562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.577 qpair failed and we were unable to recover it. 00:27:07.577 [2024-07-15 20:52:41.845754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.577 [2024-07-15 20:52:41.845765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.577 qpair failed and we were unable to recover it. 00:27:07.577 [2024-07-15 20:52:41.845972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.577 [2024-07-15 20:52:41.845983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.577 qpair failed and we were unable to recover it. 00:27:07.577 [2024-07-15 20:52:41.846202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.577 [2024-07-15 20:52:41.846215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.577 qpair failed and we were unable to recover it. 00:27:07.577 [2024-07-15 20:52:41.846529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.577 [2024-07-15 20:52:41.846541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.577 qpair failed and we were unable to recover it. 00:27:07.577 [2024-07-15 20:52:41.846666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.577 [2024-07-15 20:52:41.846678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.577 qpair failed and we were unable to recover it. 00:27:07.577 [2024-07-15 20:52:41.846939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.577 [2024-07-15 20:52:41.846950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.577 qpair failed and we were unable to recover it. 00:27:07.577 [2024-07-15 20:52:41.847200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.577 [2024-07-15 20:52:41.847211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.577 qpair failed and we were unable to recover it. 
00:27:07.577 [2024-07-15 20:52:41.847399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.577 [2024-07-15 20:52:41.847411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.577 qpair failed and we were unable to recover it. 00:27:07.577 [2024-07-15 20:52:41.847546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.577 [2024-07-15 20:52:41.847557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.577 qpair failed and we were unable to recover it. 00:27:07.577 [2024-07-15 20:52:41.847746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.577 [2024-07-15 20:52:41.847758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.577 qpair failed and we were unable to recover it. 00:27:07.577 [2024-07-15 20:52:41.847995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.577 [2024-07-15 20:52:41.848006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.577 qpair failed and we were unable to recover it. 00:27:07.577 [2024-07-15 20:52:41.848138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.577 [2024-07-15 20:52:41.848150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.577 qpair failed and we were unable to recover it. 00:27:07.577 [2024-07-15 20:52:41.848337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.577 [2024-07-15 20:52:41.848350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.577 qpair failed and we were unable to recover it. 00:27:07.577 [2024-07-15 20:52:41.848586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.577 [2024-07-15 20:52:41.848597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.577 qpair failed and we were unable to recover it. 00:27:07.577 [2024-07-15 20:52:41.848773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.577 [2024-07-15 20:52:41.848784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.577 qpair failed and we were unable to recover it. 00:27:07.577 [2024-07-15 20:52:41.848992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.577 [2024-07-15 20:52:41.849004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.577 qpair failed and we were unable to recover it. 00:27:07.577 [2024-07-15 20:52:41.849206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.577 [2024-07-15 20:52:41.849218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.577 qpair failed and we were unable to recover it. 
00:27:07.577 [2024-07-15 20:52:41.849404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.849416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 00:27:07.578 [2024-07-15 20:52:41.849585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.849596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 00:27:07.578 [2024-07-15 20:52:41.849878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.849890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 00:27:07.578 [2024-07-15 20:52:41.850127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.850139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 00:27:07.578 [2024-07-15 20:52:41.850347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.850359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 00:27:07.578 [2024-07-15 20:52:41.850622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.850633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 00:27:07.578 [2024-07-15 20:52:41.850811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.850823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 00:27:07.578 [2024-07-15 20:52:41.851090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.851101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 00:27:07.578 [2024-07-15 20:52:41.851313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.851324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 00:27:07.578 [2024-07-15 20:52:41.851593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.851604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 
00:27:07.578 [2024-07-15 20:52:41.851774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.851784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 00:27:07.578 [2024-07-15 20:52:41.852051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.852062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 00:27:07.578 [2024-07-15 20:52:41.852301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.852314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 00:27:07.578 [2024-07-15 20:52:41.852500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.852511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 00:27:07.578 [2024-07-15 20:52:41.852628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.852640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 00:27:07.578 [2024-07-15 20:52:41.852829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.852840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 00:27:07.578 [2024-07-15 20:52:41.853075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.853086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 00:27:07.578 [2024-07-15 20:52:41.853349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.853360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 00:27:07.578 [2024-07-15 20:52:41.853620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.853631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 00:27:07.578 [2024-07-15 20:52:41.853811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.853823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 
00:27:07.578 [2024-07-15 20:52:41.854041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.854052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 00:27:07.578 [2024-07-15 20:52:41.854175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.854187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 00:27:07.578 [2024-07-15 20:52:41.854469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.854481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 00:27:07.578 [2024-07-15 20:52:41.854678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.854690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 00:27:07.578 [2024-07-15 20:52:41.854980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.854992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 00:27:07.578 [2024-07-15 20:52:41.855117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.855130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 00:27:07.578 [2024-07-15 20:52:41.855344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.855357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 00:27:07.578 [2024-07-15 20:52:41.855533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.855545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 00:27:07.578 [2024-07-15 20:52:41.855844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.855856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 00:27:07.578 [2024-07-15 20:52:41.855984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.855996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 
00:27:07.578 [2024-07-15 20:52:41.856277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.856290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 00:27:07.578 [2024-07-15 20:52:41.856417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.856429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 00:27:07.578 [2024-07-15 20:52:41.856669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.856680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 00:27:07.578 [2024-07-15 20:52:41.856954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.856966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 00:27:07.578 [2024-07-15 20:52:41.857151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.857163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 00:27:07.578 [2024-07-15 20:52:41.857424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.857436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 00:27:07.578 [2024-07-15 20:52:41.857616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.857627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 00:27:07.578 [2024-07-15 20:52:41.857820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.857831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 00:27:07.578 [2024-07-15 20:52:41.858008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.858020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 00:27:07.578 [2024-07-15 20:52:41.858210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.858232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 
00:27:07.578 [2024-07-15 20:52:41.858345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.578 [2024-07-15 20:52:41.858358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.578 qpair failed and we were unable to recover it. 00:27:07.578 [2024-07-15 20:52:41.858481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.858493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 00:27:07.579 [2024-07-15 20:52:41.858624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.858636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 00:27:07.579 [2024-07-15 20:52:41.858812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.858825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 00:27:07.579 [2024-07-15 20:52:41.859062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.859074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 00:27:07.579 [2024-07-15 20:52:41.859340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.859355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 00:27:07.579 [2024-07-15 20:52:41.859551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.859564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 00:27:07.579 [2024-07-15 20:52:41.859820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.859832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 00:27:07.579 [2024-07-15 20:52:41.859967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.859979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 00:27:07.579 [2024-07-15 20:52:41.860252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.860264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 
00:27:07.579 [2024-07-15 20:52:41.860401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.860412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 00:27:07.579 [2024-07-15 20:52:41.860609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.860620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 00:27:07.579 [2024-07-15 20:52:41.860863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.860874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 00:27:07.579 [2024-07-15 20:52:41.861006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.861018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 00:27:07.579 [2024-07-15 20:52:41.861279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.861292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 00:27:07.579 [2024-07-15 20:52:41.861479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.861490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 00:27:07.579 [2024-07-15 20:52:41.861677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.861689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 00:27:07.579 [2024-07-15 20:52:41.861807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.861818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 00:27:07.579 [2024-07-15 20:52:41.862011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.862023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 00:27:07.579 [2024-07-15 20:52:41.862285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.862296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 
00:27:07.579 [2024-07-15 20:52:41.862474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.862486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 00:27:07.579 [2024-07-15 20:52:41.862681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.862692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 00:27:07.579 [2024-07-15 20:52:41.862988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.863000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 00:27:07.579 [2024-07-15 20:52:41.863177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.863189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 00:27:07.579 [2024-07-15 20:52:41.863382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.863394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 00:27:07.579 [2024-07-15 20:52:41.863513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.863528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 00:27:07.579 [2024-07-15 20:52:41.863670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.863682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 00:27:07.579 [2024-07-15 20:52:41.863815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.863827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 00:27:07.579 [2024-07-15 20:52:41.863996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.864007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 00:27:07.579 [2024-07-15 20:52:41.864129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.864141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 
00:27:07.579 [2024-07-15 20:52:41.864336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.864349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 00:27:07.579 [2024-07-15 20:52:41.864483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.864496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 00:27:07.579 [2024-07-15 20:52:41.864734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.864746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 00:27:07.579 [2024-07-15 20:52:41.865003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.865014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 00:27:07.579 [2024-07-15 20:52:41.865272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.865284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 00:27:07.579 [2024-07-15 20:52:41.865452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.865463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 00:27:07.579 [2024-07-15 20:52:41.865644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.865656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 00:27:07.579 [2024-07-15 20:52:41.865840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.865851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 00:27:07.579 [2024-07-15 20:52:41.866035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.866046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 00:27:07.579 [2024-07-15 20:52:41.866239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.866252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 
00:27:07.579 [2024-07-15 20:52:41.866446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.866457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 00:27:07.579 [2024-07-15 20:52:41.866706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.579 [2024-07-15 20:52:41.866718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.579 qpair failed and we were unable to recover it. 00:27:07.580 [2024-07-15 20:52:41.866931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.580 [2024-07-15 20:52:41.866942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.580 qpair failed and we were unable to recover it. 00:27:07.580 [2024-07-15 20:52:41.867231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.580 [2024-07-15 20:52:41.867243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.580 qpair failed and we were unable to recover it. 00:27:07.580 [2024-07-15 20:52:41.867469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.580 [2024-07-15 20:52:41.867481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.580 qpair failed and we were unable to recover it. 00:27:07.580 [2024-07-15 20:52:41.867742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.580 [2024-07-15 20:52:41.867753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.580 qpair failed and we were unable to recover it. 00:27:07.580 [2024-07-15 20:52:41.868015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.580 [2024-07-15 20:52:41.868027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.580 qpair failed and we were unable to recover it. 00:27:07.580 [2024-07-15 20:52:41.868325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.580 [2024-07-15 20:52:41.868337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.580 qpair failed and we were unable to recover it. 00:27:07.580 [2024-07-15 20:52:41.868521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.580 [2024-07-15 20:52:41.868533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.580 qpair failed and we were unable to recover it. 00:27:07.580 [2024-07-15 20:52:41.868703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.580 [2024-07-15 20:52:41.868714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.580 qpair failed and we were unable to recover it. 
00:27:07.580 [2024-07-15 20:52:41.869022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.580 [2024-07-15 20:52:41.869034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.580 qpair failed and we were unable to recover it. 00:27:07.580 [2024-07-15 20:52:41.869270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.580 [2024-07-15 20:52:41.869282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.580 qpair failed and we were unable to recover it. 00:27:07.580 [2024-07-15 20:52:41.869587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.580 [2024-07-15 20:52:41.869599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.580 qpair failed and we were unable to recover it. 00:27:07.580 [2024-07-15 20:52:41.869776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.580 [2024-07-15 20:52:41.869788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.580 qpair failed and we were unable to recover it. 00:27:07.580 [2024-07-15 20:52:41.869991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.580 [2024-07-15 20:52:41.870002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.580 qpair failed and we were unable to recover it. 00:27:07.580 [2024-07-15 20:52:41.870178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.580 [2024-07-15 20:52:41.870190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.580 qpair failed and we were unable to recover it. 00:27:07.580 [2024-07-15 20:52:41.870403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.580 [2024-07-15 20:52:41.870415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.580 qpair failed and we were unable to recover it. 00:27:07.580 [2024-07-15 20:52:41.870688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.580 [2024-07-15 20:52:41.870699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.580 qpair failed and we were unable to recover it. 00:27:07.580 [2024-07-15 20:52:41.870944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.580 [2024-07-15 20:52:41.870956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.580 qpair failed and we were unable to recover it. 00:27:07.580 [2024-07-15 20:52:41.871083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.580 [2024-07-15 20:52:41.871095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.580 qpair failed and we were unable to recover it. 
00:27:07.580 [2024-07-15 20:52:41.871360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.580 [2024-07-15 20:52:41.871372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.580 qpair failed and we were unable to recover it.
[... the three-line failure above repeats roughly 200 times between 20:52:41.871 and 20:52:41.917 (log time 00:27:07.580-00:27:07.585), differing only in timestamp and in the tqpair pointer, which moves from 0x7f4840000b90 to 0x7f4838000b90 (from 20:52:41.887257), then to 0x7f4848000b90 (from 20:52:41.895690), then one attempt on 0x18d9ed0 (20:52:41.905438), then back to 0x7f4840000b90; every attempt targets addr=10.0.0.2, port=4420 and ends with "qpair failed and we were unable to recover it." ...]
00:27:07.585 [2024-07-15 20:52:41.916581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.585 [2024-07-15 20:52:41.916592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.585 qpair failed and we were unable to recover it.
00:27:07.585 [2024-07-15 20:52:41.916711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.585 [2024-07-15 20:52:41.916722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.585 qpair failed and we were unable to recover it. 00:27:07.585 [2024-07-15 20:52:41.916845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.585 [2024-07-15 20:52:41.916856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.585 qpair failed and we were unable to recover it. 00:27:07.585 [2024-07-15 20:52:41.916984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.585 [2024-07-15 20:52:41.916996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.585 qpair failed and we were unable to recover it. 00:27:07.585 [2024-07-15 20:52:41.917188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.585 [2024-07-15 20:52:41.917200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.585 qpair failed and we were unable to recover it. 00:27:07.585 [2024-07-15 20:52:41.917378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.585 [2024-07-15 20:52:41.917390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.585 qpair failed and we were unable to recover it. 00:27:07.585 [2024-07-15 20:52:41.917639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.917651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 00:27:07.586 [2024-07-15 20:52:41.917779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.917791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 00:27:07.586 [2024-07-15 20:52:41.918044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.918056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 00:27:07.586 [2024-07-15 20:52:41.918242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.918254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 00:27:07.586 [2024-07-15 20:52:41.918517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.918529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 
00:27:07.586 [2024-07-15 20:52:41.918705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.918716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 00:27:07.586 [2024-07-15 20:52:41.918980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.918991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 00:27:07.586 [2024-07-15 20:52:41.919176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.919188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 00:27:07.586 [2024-07-15 20:52:41.919492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.919504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 00:27:07.586 [2024-07-15 20:52:41.919738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.919750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 00:27:07.586 [2024-07-15 20:52:41.919979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.919991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 00:27:07.586 [2024-07-15 20:52:41.920175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.920187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 00:27:07.586 [2024-07-15 20:52:41.920430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.920443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 00:27:07.586 [2024-07-15 20:52:41.920698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.920710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 00:27:07.586 [2024-07-15 20:52:41.920900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.920912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 
00:27:07.586 [2024-07-15 20:52:41.921030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.921041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 00:27:07.586 [2024-07-15 20:52:41.921259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.921271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 00:27:07.586 [2024-07-15 20:52:41.921488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.921501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 00:27:07.586 [2024-07-15 20:52:41.921700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.921712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 00:27:07.586 [2024-07-15 20:52:41.921992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.922003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 00:27:07.586 [2024-07-15 20:52:41.922115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.922127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 00:27:07.586 [2024-07-15 20:52:41.922398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.922410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 00:27:07.586 [2024-07-15 20:52:41.922613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.922625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 00:27:07.586 [2024-07-15 20:52:41.922812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.922823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 00:27:07.586 [2024-07-15 20:52:41.923083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.923095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 
00:27:07.586 [2024-07-15 20:52:41.923310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.923322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 00:27:07.586 [2024-07-15 20:52:41.923440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.923451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 00:27:07.586 [2024-07-15 20:52:41.923660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.923671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 00:27:07.586 [2024-07-15 20:52:41.923907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.923918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 00:27:07.586 [2024-07-15 20:52:41.924103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.924115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 00:27:07.586 [2024-07-15 20:52:41.924296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.924308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 00:27:07.586 [2024-07-15 20:52:41.924514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.924526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 00:27:07.586 [2024-07-15 20:52:41.924667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.924680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 00:27:07.586 [2024-07-15 20:52:41.924989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.925001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 00:27:07.586 [2024-07-15 20:52:41.925109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.925121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 
00:27:07.586 [2024-07-15 20:52:41.925381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.925392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 00:27:07.586 [2024-07-15 20:52:41.925517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.925528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 00:27:07.586 [2024-07-15 20:52:41.925654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.925665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 00:27:07.586 [2024-07-15 20:52:41.925882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.925894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 00:27:07.586 [2024-07-15 20:52:41.926128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.926140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 00:27:07.586 [2024-07-15 20:52:41.926331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.586 [2024-07-15 20:52:41.926343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.586 qpair failed and we were unable to recover it. 00:27:07.587 [2024-07-15 20:52:41.926568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.926580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 00:27:07.587 [2024-07-15 20:52:41.926763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.926774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 00:27:07.587 [2024-07-15 20:52:41.926961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.926973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 00:27:07.587 [2024-07-15 20:52:41.927151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.927162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 
00:27:07.587 [2024-07-15 20:52:41.927424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.927436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 00:27:07.587 [2024-07-15 20:52:41.927671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.927683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 00:27:07.587 [2024-07-15 20:52:41.927817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.927828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 00:27:07.587 [2024-07-15 20:52:41.928085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.928096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 00:27:07.587 [2024-07-15 20:52:41.928271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.928283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 00:27:07.587 [2024-07-15 20:52:41.928422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.928434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 00:27:07.587 [2024-07-15 20:52:41.928546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.928558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 00:27:07.587 [2024-07-15 20:52:41.928744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.928756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 00:27:07.587 [2024-07-15 20:52:41.928936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.928948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 00:27:07.587 [2024-07-15 20:52:41.929200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.929211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 
00:27:07.587 [2024-07-15 20:52:41.929415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.929427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 00:27:07.587 [2024-07-15 20:52:41.929566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.929577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 00:27:07.587 [2024-07-15 20:52:41.929701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.929714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 00:27:07.587 [2024-07-15 20:52:41.929898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.929909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 00:27:07.587 [2024-07-15 20:52:41.930093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.930105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 00:27:07.587 [2024-07-15 20:52:41.930288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.930301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 00:27:07.587 [2024-07-15 20:52:41.930563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.930575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 00:27:07.587 [2024-07-15 20:52:41.930817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.930829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 00:27:07.587 [2024-07-15 20:52:41.931078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.931090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 00:27:07.587 [2024-07-15 20:52:41.931284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.931296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 
00:27:07.587 [2024-07-15 20:52:41.931486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.931497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 00:27:07.587 [2024-07-15 20:52:41.931685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.931696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 00:27:07.587 [2024-07-15 20:52:41.931833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.931844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 00:27:07.587 [2024-07-15 20:52:41.932031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.932043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 00:27:07.587 [2024-07-15 20:52:41.932170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.932181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 00:27:07.587 [2024-07-15 20:52:41.932416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.932428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 00:27:07.587 [2024-07-15 20:52:41.932608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.932620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 00:27:07.587 [2024-07-15 20:52:41.932859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.932870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 00:27:07.587 [2024-07-15 20:52:41.933142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.933153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 00:27:07.587 [2024-07-15 20:52:41.933328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.933340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 
00:27:07.587 [2024-07-15 20:52:41.933445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.933455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 00:27:07.587 [2024-07-15 20:52:41.933585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.933596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 00:27:07.587 [2024-07-15 20:52:41.933811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.933823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 00:27:07.587 [2024-07-15 20:52:41.934025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.934036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 00:27:07.587 [2024-07-15 20:52:41.934161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.934174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 00:27:07.587 [2024-07-15 20:52:41.934361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.587 [2024-07-15 20:52:41.934373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.587 qpair failed and we were unable to recover it. 00:27:07.587 [2024-07-15 20:52:41.934560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.588 [2024-07-15 20:52:41.934572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.588 qpair failed and we were unable to recover it. 00:27:07.588 [2024-07-15 20:52:41.934766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.588 [2024-07-15 20:52:41.934778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.588 qpair failed and we were unable to recover it. 00:27:07.588 [2024-07-15 20:52:41.934996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.588 [2024-07-15 20:52:41.935007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.588 qpair failed and we were unable to recover it. 00:27:07.588 [2024-07-15 20:52:41.935117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.588 [2024-07-15 20:52:41.935127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.588 qpair failed and we were unable to recover it. 
00:27:07.588 [2024-07-15 20:52:41.935386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.588 [2024-07-15 20:52:41.935398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.588 qpair failed and we were unable to recover it. 00:27:07.588 [2024-07-15 20:52:41.935648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.588 [2024-07-15 20:52:41.935660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.588 qpair failed and we were unable to recover it. 00:27:07.588 [2024-07-15 20:52:41.935841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.588 [2024-07-15 20:52:41.935854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.588 qpair failed and we were unable to recover it. 00:27:07.588 [2024-07-15 20:52:41.936035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.588 [2024-07-15 20:52:41.936047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.588 qpair failed and we were unable to recover it. 00:27:07.588 [2024-07-15 20:52:41.936233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.588 [2024-07-15 20:52:41.936245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.588 qpair failed and we were unable to recover it. 00:27:07.588 [2024-07-15 20:52:41.936432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.588 [2024-07-15 20:52:41.936444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.588 qpair failed and we were unable to recover it. 00:27:07.588 [2024-07-15 20:52:41.936631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.588 [2024-07-15 20:52:41.936643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.588 qpair failed and we were unable to recover it. 00:27:07.588 [2024-07-15 20:52:41.936831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.588 [2024-07-15 20:52:41.936842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.588 qpair failed and we were unable to recover it. 00:27:07.588 [2024-07-15 20:52:41.937012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.588 [2024-07-15 20:52:41.937023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.588 qpair failed and we were unable to recover it. 00:27:07.588 [2024-07-15 20:52:41.937199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.588 [2024-07-15 20:52:41.937210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.588 qpair failed and we were unable to recover it. 
00:27:07.588 [2024-07-15 20:52:41.937377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.588 [2024-07-15 20:52:41.937388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.588 qpair failed and we were unable to recover it. 00:27:07.588 [2024-07-15 20:52:41.937570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.588 [2024-07-15 20:52:41.937583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.588 qpair failed and we were unable to recover it. 00:27:07.588 [2024-07-15 20:52:41.937706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.588 [2024-07-15 20:52:41.937720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.588 qpair failed and we were unable to recover it. 00:27:07.588 [2024-07-15 20:52:41.937937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.588 [2024-07-15 20:52:41.937949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.588 qpair failed and we were unable to recover it. 00:27:07.588 [2024-07-15 20:52:41.938135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.588 [2024-07-15 20:52:41.938147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.588 qpair failed and we were unable to recover it. 00:27:07.588 [2024-07-15 20:52:41.938257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.588 [2024-07-15 20:52:41.938268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.588 qpair failed and we were unable to recover it. 00:27:07.588 [2024-07-15 20:52:41.938403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.588 [2024-07-15 20:52:41.938416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.588 qpair failed and we were unable to recover it. 00:27:07.588 [2024-07-15 20:52:41.938541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.588 [2024-07-15 20:52:41.938553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.588 qpair failed and we were unable to recover it. 00:27:07.588 [2024-07-15 20:52:41.938676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.588 [2024-07-15 20:52:41.938688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.588 qpair failed and we were unable to recover it. 00:27:07.588 [2024-07-15 20:52:41.938858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.588 [2024-07-15 20:52:41.938870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.588 qpair failed and we were unable to recover it. 
00:27:07.588 [2024-07-15 20:52:41.939080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.588 [2024-07-15 20:52:41.939091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.588 qpair failed and we were unable to recover it. 00:27:07.588 [2024-07-15 20:52:41.939205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.588 [2024-07-15 20:52:41.939216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.588 qpair failed and we were unable to recover it. 00:27:07.588 [2024-07-15 20:52:41.939494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.588 [2024-07-15 20:52:41.939505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.588 qpair failed and we were unable to recover it. 00:27:07.588 [2024-07-15 20:52:41.939697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.588 [2024-07-15 20:52:41.939708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.588 qpair failed and we were unable to recover it. 00:27:07.588 [2024-07-15 20:52:41.940006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.588 [2024-07-15 20:52:41.940017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.588 qpair failed and we were unable to recover it. 00:27:07.588 [2024-07-15 20:52:41.940197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.588 [2024-07-15 20:52:41.940209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.588 qpair failed and we were unable to recover it. 00:27:07.588 [2024-07-15 20:52:41.940404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.588 [2024-07-15 20:52:41.940415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.588 qpair failed and we were unable to recover it. 00:27:07.588 [2024-07-15 20:52:41.941278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.588 [2024-07-15 20:52:41.941302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.588 qpair failed and we were unable to recover it. 00:27:07.588 [2024-07-15 20:52:41.941460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.588 [2024-07-15 20:52:41.941473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.588 qpair failed and we were unable to recover it. 00:27:07.588 [2024-07-15 20:52:41.941611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.588 [2024-07-15 20:52:41.941624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.588 qpair failed and we were unable to recover it. 
00:27:07.588 [2024-07-15 20:52:41.941812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.588 [2024-07-15 20:52:41.941823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.588 qpair failed and we were unable to recover it. 00:27:07.588 [2024-07-15 20:52:41.942453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.588 [2024-07-15 20:52:41.942474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.588 qpair failed and we were unable to recover it. 00:27:07.588 [2024-07-15 20:52:41.942695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.589 [2024-07-15 20:52:41.942708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.589 qpair failed and we were unable to recover it. 00:27:07.589 [2024-07-15 20:52:41.942886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.589 [2024-07-15 20:52:41.942899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.589 qpair failed and we were unable to recover it. 00:27:07.589 [2024-07-15 20:52:41.943087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.589 [2024-07-15 20:52:41.943098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.589 qpair failed and we were unable to recover it. 00:27:07.589 [2024-07-15 20:52:41.943233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.589 [2024-07-15 20:52:41.943245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.589 qpair failed and we were unable to recover it. 00:27:07.589 [2024-07-15 20:52:41.943405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.589 [2024-07-15 20:52:41.943417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.589 qpair failed and we were unable to recover it. 00:27:07.589 [2024-07-15 20:52:41.943660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.589 [2024-07-15 20:52:41.943672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.589 qpair failed and we were unable to recover it. 00:27:07.589 [2024-07-15 20:52:41.943872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.589 [2024-07-15 20:52:41.943884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.589 qpair failed and we were unable to recover it. 00:27:07.589 [2024-07-15 20:52:41.944081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.589 [2024-07-15 20:52:41.944093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.589 qpair failed and we were unable to recover it. 
00:27:07.589 [2024-07-15 20:52:41.944234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.589 [2024-07-15 20:52:41.944246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.589 qpair failed and we were unable to recover it. 00:27:07.589 [2024-07-15 20:52:41.944376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.589 [2024-07-15 20:52:41.944388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.589 qpair failed and we were unable to recover it. 00:27:07.589 [2024-07-15 20:52:41.944573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.589 [2024-07-15 20:52:41.944585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.589 qpair failed and we were unable to recover it. 00:27:07.589 [2024-07-15 20:52:41.944714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.589 [2024-07-15 20:52:41.944725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.589 qpair failed and we were unable to recover it. 00:27:07.589 [2024-07-15 20:52:41.944913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.589 [2024-07-15 20:52:41.944924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.589 qpair failed and we were unable to recover it. 00:27:07.589 [2024-07-15 20:52:41.945088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.589 [2024-07-15 20:52:41.945099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.589 qpair failed and we were unable to recover it. 00:27:07.589 [2024-07-15 20:52:41.945280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.589 [2024-07-15 20:52:41.945292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.589 qpair failed and we were unable to recover it. 00:27:07.589 [2024-07-15 20:52:41.945466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.589 [2024-07-15 20:52:41.945477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.589 qpair failed and we were unable to recover it. 00:27:07.589 [2024-07-15 20:52:41.945646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.589 [2024-07-15 20:52:41.945657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.589 qpair failed and we were unable to recover it. 00:27:07.589 [2024-07-15 20:52:41.945900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.589 [2024-07-15 20:52:41.945911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.589 qpair failed and we were unable to recover it. 
[some 200 further repetitions of the same three-line error sequence omitted: connect() failed with errno = 111, the nvme_tcp_qpair_connect_sock error for tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.", with timestamps advancing from 20:52:41.946019 through 20:52:41.987060]
00:27:07.594 [2024-07-15 20:52:41.987255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.594 [2024-07-15 20:52:41.987267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.594 qpair failed and we were unable to recover it. 00:27:07.594 [2024-07-15 20:52:41.987389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.594 [2024-07-15 20:52:41.987401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.594 qpair failed and we were unable to recover it. 00:27:07.594 [2024-07-15 20:52:41.987601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.594 [2024-07-15 20:52:41.987615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.594 qpair failed and we were unable to recover it. 00:27:07.594 [2024-07-15 20:52:41.987802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.594 [2024-07-15 20:52:41.987814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.594 qpair failed and we were unable to recover it. 00:27:07.594 [2024-07-15 20:52:41.988005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.594 [2024-07-15 20:52:41.988017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.594 qpair failed and we were unable to recover it. 00:27:07.594 [2024-07-15 20:52:41.988193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.594 [2024-07-15 20:52:41.988204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.594 qpair failed and we were unable to recover it. 00:27:07.594 [2024-07-15 20:52:41.988488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.594 [2024-07-15 20:52:41.988500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.594 qpair failed and we were unable to recover it. 00:27:07.594 [2024-07-15 20:52:41.988688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.594 [2024-07-15 20:52:41.988700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.594 qpair failed and we were unable to recover it. 00:27:07.594 [2024-07-15 20:52:41.988879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.594 [2024-07-15 20:52:41.988891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.594 qpair failed and we were unable to recover it. 00:27:07.594 [2024-07-15 20:52:41.989061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.594 [2024-07-15 20:52:41.989073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.594 qpair failed and we were unable to recover it. 
00:27:07.594 [2024-07-15 20:52:41.989249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.594 [2024-07-15 20:52:41.989261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.594 qpair failed and we were unable to recover it. 00:27:07.594 [2024-07-15 20:52:41.989544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.594 [2024-07-15 20:52:41.989557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.594 qpair failed and we were unable to recover it. 00:27:07.594 [2024-07-15 20:52:41.989806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.594 [2024-07-15 20:52:41.989817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.594 qpair failed and we were unable to recover it. 00:27:07.594 [2024-07-15 20:52:41.990049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.594 [2024-07-15 20:52:41.990060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.594 qpair failed and we were unable to recover it. 00:27:07.594 [2024-07-15 20:52:41.990306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.594 [2024-07-15 20:52:41.990318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.594 qpair failed and we were unable to recover it. 00:27:07.594 [2024-07-15 20:52:41.990429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.594 [2024-07-15 20:52:41.990441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.594 qpair failed and we were unable to recover it. 00:27:07.594 [2024-07-15 20:52:41.990637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.594 [2024-07-15 20:52:41.990648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.594 qpair failed and we were unable to recover it. 00:27:07.594 [2024-07-15 20:52:41.990901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.594 [2024-07-15 20:52:41.990912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.594 qpair failed and we were unable to recover it. 00:27:07.594 [2024-07-15 20:52:41.991102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.594 [2024-07-15 20:52:41.991113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.594 qpair failed and we were unable to recover it. 00:27:07.594 [2024-07-15 20:52:41.991372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.594 [2024-07-15 20:52:41.991385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.594 qpair failed and we were unable to recover it. 
00:27:07.594 [2024-07-15 20:52:41.991516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.594 [2024-07-15 20:52:41.991528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.594 qpair failed and we were unable to recover it. 00:27:07.594 [2024-07-15 20:52:41.991669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.991680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 00:27:07.595 [2024-07-15 20:52:41.991869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.991881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 00:27:07.595 [2024-07-15 20:52:41.992058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.992070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 00:27:07.595 [2024-07-15 20:52:41.992208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.992220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 00:27:07.595 [2024-07-15 20:52:41.992459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.992470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 00:27:07.595 [2024-07-15 20:52:41.992686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.992697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 00:27:07.595 [2024-07-15 20:52:41.992825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.992837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 00:27:07.595 [2024-07-15 20:52:41.993044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.993055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 00:27:07.595 [2024-07-15 20:52:41.993265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.993279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 
00:27:07.595 [2024-07-15 20:52:41.993458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.993470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 00:27:07.595 [2024-07-15 20:52:41.993599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.993611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 00:27:07.595 [2024-07-15 20:52:41.993868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.993880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 00:27:07.595 [2024-07-15 20:52:41.994090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.994102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 00:27:07.595 [2024-07-15 20:52:41.994391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.994404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 00:27:07.595 [2024-07-15 20:52:41.994664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.994676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 00:27:07.595 [2024-07-15 20:52:41.994804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.994816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 00:27:07.595 [2024-07-15 20:52:41.995011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.995023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 00:27:07.595 [2024-07-15 20:52:41.995180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.995192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 00:27:07.595 [2024-07-15 20:52:41.995390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.995401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 
00:27:07.595 [2024-07-15 20:52:41.995590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.995602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 00:27:07.595 [2024-07-15 20:52:41.995709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.995720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 00:27:07.595 [2024-07-15 20:52:41.995923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.995938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 00:27:07.595 [2024-07-15 20:52:41.996201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.996213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 00:27:07.595 [2024-07-15 20:52:41.996420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.996432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 00:27:07.595 [2024-07-15 20:52:41.996570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.996581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 00:27:07.595 [2024-07-15 20:52:41.996760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.996773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 00:27:07.595 [2024-07-15 20:52:41.996971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.996983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 00:27:07.595 [2024-07-15 20:52:41.997172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.997185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 00:27:07.595 [2024-07-15 20:52:41.997290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.997301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 
00:27:07.595 [2024-07-15 20:52:41.997441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.997453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 00:27:07.595 [2024-07-15 20:52:41.997720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.997732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 00:27:07.595 [2024-07-15 20:52:41.997920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.997932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 00:27:07.595 [2024-07-15 20:52:41.998132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.998144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 00:27:07.595 [2024-07-15 20:52:41.998343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.998355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 00:27:07.595 [2024-07-15 20:52:41.998487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.998499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 00:27:07.595 [2024-07-15 20:52:41.998687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.998699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 00:27:07.595 [2024-07-15 20:52:41.998875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.998888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 00:27:07.595 [2024-07-15 20:52:41.999144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.999157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 00:27:07.595 [2024-07-15 20:52:41.999353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.999365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 
00:27:07.595 [2024-07-15 20:52:41.999604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.999616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 00:27:07.595 [2024-07-15 20:52:41.999790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.595 [2024-07-15 20:52:41.999802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.595 qpair failed and we were unable to recover it. 00:27:07.595 [2024-07-15 20:52:42.000060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.000073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 00:27:07.596 [2024-07-15 20:52:42.000363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.000375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 00:27:07.596 [2024-07-15 20:52:42.000567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.000579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 00:27:07.596 [2024-07-15 20:52:42.000705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.000716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 00:27:07.596 [2024-07-15 20:52:42.000984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.000996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 00:27:07.596 [2024-07-15 20:52:42.001181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.001192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 00:27:07.596 [2024-07-15 20:52:42.001481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.001493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 00:27:07.596 [2024-07-15 20:52:42.001625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.001637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 
00:27:07.596 [2024-07-15 20:52:42.001770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.001782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 00:27:07.596 [2024-07-15 20:52:42.001981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.001992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 00:27:07.596 [2024-07-15 20:52:42.002171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.002182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 00:27:07.596 [2024-07-15 20:52:42.002358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.002371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 00:27:07.596 [2024-07-15 20:52:42.002548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.002560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 00:27:07.596 [2024-07-15 20:52:42.002745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.002757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 00:27:07.596 [2024-07-15 20:52:42.003048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.003060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 00:27:07.596 [2024-07-15 20:52:42.003164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.003175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 00:27:07.596 [2024-07-15 20:52:42.003382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.003394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 00:27:07.596 [2024-07-15 20:52:42.003576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.003588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 
00:27:07.596 [2024-07-15 20:52:42.003757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.003769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 00:27:07.596 [2024-07-15 20:52:42.003891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.003902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 00:27:07.596 [2024-07-15 20:52:42.004163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.004177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 00:27:07.596 [2024-07-15 20:52:42.004442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.004454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 00:27:07.596 [2024-07-15 20:52:42.004576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.004587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 00:27:07.596 [2024-07-15 20:52:42.004710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.004721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 00:27:07.596 [2024-07-15 20:52:42.005001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.005012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 00:27:07.596 [2024-07-15 20:52:42.005243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.005255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 00:27:07.596 [2024-07-15 20:52:42.005491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.005503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 00:27:07.596 [2024-07-15 20:52:42.005751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.005763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 
00:27:07.596 [2024-07-15 20:52:42.006039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.006050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 00:27:07.596 [2024-07-15 20:52:42.006324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.006337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 00:27:07.596 [2024-07-15 20:52:42.006545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.006557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 00:27:07.596 [2024-07-15 20:52:42.006745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.006758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 00:27:07.596 [2024-07-15 20:52:42.006979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.006991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 00:27:07.596 [2024-07-15 20:52:42.007236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.007248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 00:27:07.596 [2024-07-15 20:52:42.007383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.007395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 00:27:07.596 [2024-07-15 20:52:42.007540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.007551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 00:27:07.596 [2024-07-15 20:52:42.007807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.007818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 00:27:07.596 [2024-07-15 20:52:42.008021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.008033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 
00:27:07.596 [2024-07-15 20:52:42.008244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.008257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 00:27:07.596 [2024-07-15 20:52:42.008375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.008387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.596 qpair failed and we were unable to recover it. 00:27:07.596 [2024-07-15 20:52:42.008570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.596 [2024-07-15 20:52:42.008582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.597 qpair failed and we were unable to recover it. 00:27:07.597 [2024-07-15 20:52:42.008769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.597 [2024-07-15 20:52:42.008781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.597 qpair failed and we were unable to recover it. 00:27:07.597 [2024-07-15 20:52:42.008974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.597 [2024-07-15 20:52:42.008986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.597 qpair failed and we were unable to recover it. 00:27:07.597 [2024-07-15 20:52:42.009174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.597 [2024-07-15 20:52:42.009186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.597 qpair failed and we were unable to recover it. 00:27:07.597 [2024-07-15 20:52:42.009448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.597 [2024-07-15 20:52:42.009460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.597 qpair failed and we were unable to recover it. 00:27:07.597 [2024-07-15 20:52:42.009585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.597 [2024-07-15 20:52:42.009598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.597 qpair failed and we were unable to recover it. 00:27:07.597 [2024-07-15 20:52:42.009778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.597 [2024-07-15 20:52:42.009789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.597 qpair failed and we were unable to recover it. 00:27:07.597 [2024-07-15 20:52:42.010092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.597 [2024-07-15 20:52:42.010129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.597 qpair failed and we were unable to recover it. 
00:27:07.597 [2024-07-15 20:52:42.010364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.597 [2024-07-15 20:52:42.010392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.597 qpair failed and we were unable to recover it. 00:27:07.597 [2024-07-15 20:52:42.010545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.597 [2024-07-15 20:52:42.010562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.597 qpair failed and we were unable to recover it. 00:27:07.597 [2024-07-15 20:52:42.010695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.597 [2024-07-15 20:52:42.010710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.597 qpair failed and we were unable to recover it. 00:27:07.597 [2024-07-15 20:52:42.010943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.597 [2024-07-15 20:52:42.010959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.597 qpair failed and we were unable to recover it. 00:27:07.597 [2024-07-15 20:52:42.011207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.597 [2024-07-15 20:52:42.011222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.597 qpair failed and we were unable to recover it. 00:27:07.597 [2024-07-15 20:52:42.011479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.597 [2024-07-15 20:52:42.011494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.597 qpair failed and we were unable to recover it. 00:27:07.597 [2024-07-15 20:52:42.011688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.597 [2024-07-15 20:52:42.011703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.597 qpair failed and we were unable to recover it. 00:27:07.597 [2024-07-15 20:52:42.011971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.597 [2024-07-15 20:52:42.011986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.597 qpair failed and we were unable to recover it. 00:27:07.597 [2024-07-15 20:52:42.012194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.597 [2024-07-15 20:52:42.012209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.597 qpair failed and we were unable to recover it. 00:27:07.597 [2024-07-15 20:52:42.012470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.597 [2024-07-15 20:52:42.012486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.597 qpair failed and we were unable to recover it. 
00:27:07.597 [2024-07-15 20:52:42.012641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.597 [2024-07-15 20:52:42.012656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.597 qpair failed and we were unable to recover it. 00:27:07.597 [2024-07-15 20:52:42.012843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.597 [2024-07-15 20:52:42.012858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.597 qpair failed and we were unable to recover it. 00:27:07.597 [2024-07-15 20:52:42.013051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.597 [2024-07-15 20:52:42.013066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.597 qpair failed and we were unable to recover it. 00:27:07.597 [2024-07-15 20:52:42.013339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.597 [2024-07-15 20:52:42.013355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.597 qpair failed and we were unable to recover it. 00:27:07.597 [2024-07-15 20:52:42.013552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.597 [2024-07-15 20:52:42.013566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.597 qpair failed and we were unable to recover it. 00:27:07.597 [2024-07-15 20:52:42.013764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.597 [2024-07-15 20:52:42.013779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.597 qpair failed and we were unable to recover it. 00:27:07.597 [2024-07-15 20:52:42.014100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.597 [2024-07-15 20:52:42.014115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.597 qpair failed and we were unable to recover it. 00:27:07.597 [2024-07-15 20:52:42.014318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.597 [2024-07-15 20:52:42.014335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.597 qpair failed and we were unable to recover it. 00:27:07.597 [2024-07-15 20:52:42.014536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.597 [2024-07-15 20:52:42.014551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.597 qpair failed and we were unable to recover it. 00:27:07.597 [2024-07-15 20:52:42.014749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.597 [2024-07-15 20:52:42.014764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.597 qpair failed and we were unable to recover it. 
00:27:07.597 [2024-07-15 20:52:42.014977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.597 [2024-07-15 20:52:42.014992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.597 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats continuously for tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420, through 2024-07-15 20:52:42.030924 ...]
00:27:07.881 [2024-07-15 20:52:42.031146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.881 [2024-07-15 20:52:42.031180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.881 qpair failed and we were unable to recover it.
00:27:07.881 [2024-07-15 20:52:42.031401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.881 [2024-07-15 20:52:42.031420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.881 qpair failed and we were unable to recover it.
[... the triplet repeats for tqpair=0x7f4840000b90 through 2024-07-15 20:52:42.039375 ...]
00:27:07.882 [2024-07-15 20:52:42.039528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.882 [2024-07-15 20:52:42.039547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.882 qpair failed and we were unable to recover it.
[... the triplet repeats for tqpair=0x7f4838000b90 through 2024-07-15 20:52:42.047880 ...]
00:27:07.883 [2024-07-15 20:52:42.048095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.883 [2024-07-15 20:52:42.048117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:07.883 qpair failed and we were unable to recover it.
[... the triplet repeats four more times for tqpair=0x7f4848000b90, through 2024-07-15 20:52:42.048963 ...]
00:27:07.883 [2024-07-15 20:52:42.049185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.883 [2024-07-15 20:52:42.049199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.883 qpair failed and we were unable to recover it.
[... the triplet repeats for tqpair=0x7f4840000b90 through 2024-07-15 20:52:42.058913 ...]
00:27:07.884 [2024-07-15 20:52:42.059122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.884 [2024-07-15 20:52:42.059134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.884 qpair failed and we were unable to recover it.
00:27:07.884 [2024-07-15 20:52:42.059318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.884 [2024-07-15 20:52:42.059330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.884 qpair failed and we were unable to recover it. 00:27:07.884 [2024-07-15 20:52:42.059568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.884 [2024-07-15 20:52:42.059580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.884 qpair failed and we were unable to recover it. 00:27:07.884 [2024-07-15 20:52:42.059720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.884 [2024-07-15 20:52:42.059733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.884 qpair failed and we were unable to recover it. 00:27:07.885 [2024-07-15 20:52:42.059863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.059875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 00:27:07.885 [2024-07-15 20:52:42.060060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.060073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 00:27:07.885 [2024-07-15 20:52:42.060348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.060360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 00:27:07.885 [2024-07-15 20:52:42.060474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.060485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 00:27:07.885 [2024-07-15 20:52:42.060625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.060636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 00:27:07.885 [2024-07-15 20:52:42.060827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.060839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 00:27:07.885 [2024-07-15 20:52:42.061076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.061088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 
00:27:07.885 [2024-07-15 20:52:42.061266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.061279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 00:27:07.885 [2024-07-15 20:52:42.061414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.061426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 00:27:07.885 [2024-07-15 20:52:42.061601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.061614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 00:27:07.885 [2024-07-15 20:52:42.061887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.061900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 00:27:07.885 [2024-07-15 20:52:42.062067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.062079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 00:27:07.885 [2024-07-15 20:52:42.062341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.062353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 00:27:07.885 [2024-07-15 20:52:42.062593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.062606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 00:27:07.885 [2024-07-15 20:52:42.062846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.062859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 00:27:07.885 [2024-07-15 20:52:42.063145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.063158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 00:27:07.885 [2024-07-15 20:52:42.063346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.063358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 
00:27:07.885 [2024-07-15 20:52:42.063497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.063509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 00:27:07.885 [2024-07-15 20:52:42.063705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.063717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 00:27:07.885 [2024-07-15 20:52:42.063853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.063865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 00:27:07.885 [2024-07-15 20:52:42.064041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.064054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 00:27:07.885 [2024-07-15 20:52:42.064318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.064331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 00:27:07.885 [2024-07-15 20:52:42.064524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.064536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 00:27:07.885 [2024-07-15 20:52:42.064727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.064739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 00:27:07.885 [2024-07-15 20:52:42.064926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.064938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 00:27:07.885 [2024-07-15 20:52:42.065199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.065211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 00:27:07.885 [2024-07-15 20:52:42.065439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.065451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 
00:27:07.885 [2024-07-15 20:52:42.065712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.065724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 00:27:07.885 [2024-07-15 20:52:42.066053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.066065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 00:27:07.885 [2024-07-15 20:52:42.066251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.066264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 00:27:07.885 [2024-07-15 20:52:42.066407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.066419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 00:27:07.885 [2024-07-15 20:52:42.066625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.066638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 00:27:07.885 [2024-07-15 20:52:42.066823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.066835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 00:27:07.885 [2024-07-15 20:52:42.067108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.067121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 00:27:07.885 [2024-07-15 20:52:42.067359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.067372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 00:27:07.885 [2024-07-15 20:52:42.067562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.067574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 00:27:07.885 [2024-07-15 20:52:42.067694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.067705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 
00:27:07.885 [2024-07-15 20:52:42.067984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.067996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 00:27:07.885 [2024-07-15 20:52:42.068252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.068264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 00:27:07.885 [2024-07-15 20:52:42.068472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.885 [2024-07-15 20:52:42.068483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.885 qpair failed and we were unable to recover it. 00:27:07.886 [2024-07-15 20:52:42.068689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.068702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 00:27:07.886 [2024-07-15 20:52:42.068985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.068997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 00:27:07.886 [2024-07-15 20:52:42.069156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.069168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 00:27:07.886 [2024-07-15 20:52:42.069303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.069315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 00:27:07.886 [2024-07-15 20:52:42.069504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.069516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 00:27:07.886 [2024-07-15 20:52:42.069776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.069789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 00:27:07.886 [2024-07-15 20:52:42.069983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.069995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 
00:27:07.886 [2024-07-15 20:52:42.070172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.070183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 00:27:07.886 [2024-07-15 20:52:42.070360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.070373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 00:27:07.886 [2024-07-15 20:52:42.070545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.070556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 00:27:07.886 [2024-07-15 20:52:42.070740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.070751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 00:27:07.886 [2024-07-15 20:52:42.071027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.071039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 00:27:07.886 [2024-07-15 20:52:42.071160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.071172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 00:27:07.886 [2024-07-15 20:52:42.071364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.071377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 00:27:07.886 [2024-07-15 20:52:42.071484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.071496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 00:27:07.886 [2024-07-15 20:52:42.071679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.071692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 00:27:07.886 [2024-07-15 20:52:42.071977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.071989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 
00:27:07.886 [2024-07-15 20:52:42.072166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.072178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 00:27:07.886 [2024-07-15 20:52:42.072414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.072426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 00:27:07.886 [2024-07-15 20:52:42.072597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.072610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 00:27:07.886 [2024-07-15 20:52:42.072747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.072759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 00:27:07.886 [2024-07-15 20:52:42.073126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.073140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 00:27:07.886 [2024-07-15 20:52:42.073326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.073338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 00:27:07.886 [2024-07-15 20:52:42.073478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.073490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 00:27:07.886 [2024-07-15 20:52:42.073678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.073690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 00:27:07.886 [2024-07-15 20:52:42.073832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.073845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 00:27:07.886 [2024-07-15 20:52:42.074040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.074052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 
00:27:07.886 [2024-07-15 20:52:42.074234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.074247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 00:27:07.886 [2024-07-15 20:52:42.074386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.074398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 00:27:07.886 [2024-07-15 20:52:42.074538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.074550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 00:27:07.886 [2024-07-15 20:52:42.074735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.074747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 00:27:07.886 [2024-07-15 20:52:42.074871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.074883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 00:27:07.886 [2024-07-15 20:52:42.075174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.075186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 00:27:07.886 [2024-07-15 20:52:42.075390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.075402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 00:27:07.886 [2024-07-15 20:52:42.075596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.075607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 00:27:07.886 [2024-07-15 20:52:42.075778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.075790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 00:27:07.886 [2024-07-15 20:52:42.075992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.076004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 
00:27:07.886 [2024-07-15 20:52:42.076149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.076161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 00:27:07.886 [2024-07-15 20:52:42.076469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.076482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 00:27:07.886 [2024-07-15 20:52:42.076723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.886 [2024-07-15 20:52:42.076734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.886 qpair failed and we were unable to recover it. 00:27:07.886 [2024-07-15 20:52:42.076877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.076889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 00:27:07.887 [2024-07-15 20:52:42.077128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.077140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 00:27:07.887 [2024-07-15 20:52:42.077310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.077322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 00:27:07.887 [2024-07-15 20:52:42.077491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.077503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 00:27:07.887 [2024-07-15 20:52:42.077626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.077638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 00:27:07.887 [2024-07-15 20:52:42.077934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.077946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 00:27:07.887 [2024-07-15 20:52:42.078190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.078202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 
00:27:07.887 [2024-07-15 20:52:42.078444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.078456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 00:27:07.887 [2024-07-15 20:52:42.078758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.078770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 00:27:07.887 [2024-07-15 20:52:42.079093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.079106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 00:27:07.887 [2024-07-15 20:52:42.079289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.079301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 00:27:07.887 [2024-07-15 20:52:42.079507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.079518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 00:27:07.887 [2024-07-15 20:52:42.079659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.079672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 00:27:07.887 [2024-07-15 20:52:42.079847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.079859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 00:27:07.887 [2024-07-15 20:52:42.079995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.080007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 00:27:07.887 [2024-07-15 20:52:42.080234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.080246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 00:27:07.887 [2024-07-15 20:52:42.080385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.080397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 
00:27:07.887 [2024-07-15 20:52:42.080588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.080600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 00:27:07.887 [2024-07-15 20:52:42.080776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.080788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 00:27:07.887 [2024-07-15 20:52:42.080976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.080988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 00:27:07.887 [2024-07-15 20:52:42.081243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.081256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 00:27:07.887 [2024-07-15 20:52:42.081446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.081460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 00:27:07.887 [2024-07-15 20:52:42.081658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.081670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 00:27:07.887 [2024-07-15 20:52:42.081796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.081808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 00:27:07.887 [2024-07-15 20:52:42.082042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.082055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 00:27:07.887 [2024-07-15 20:52:42.082323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.082336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 00:27:07.887 [2024-07-15 20:52:42.082530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.082542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 
00:27:07.887 [2024-07-15 20:52:42.082728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.082740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 00:27:07.887 [2024-07-15 20:52:42.082860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.082872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 00:27:07.887 [2024-07-15 20:52:42.083001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.083013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 00:27:07.887 [2024-07-15 20:52:42.083201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.083213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 00:27:07.887 [2024-07-15 20:52:42.083348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.083360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 00:27:07.887 [2024-07-15 20:52:42.083484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.083496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 00:27:07.887 [2024-07-15 20:52:42.083733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.083746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 00:27:07.887 [2024-07-15 20:52:42.084023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.084036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 00:27:07.887 [2024-07-15 20:52:42.084213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.084230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 00:27:07.887 [2024-07-15 20:52:42.084505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.084517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 
00:27:07.887 [2024-07-15 20:52:42.084686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.084699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 00:27:07.887 [2024-07-15 20:52:42.084812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.084824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 00:27:07.887 [2024-07-15 20:52:42.085030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.085043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 00:27:07.887 [2024-07-15 20:52:42.085267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.887 [2024-07-15 20:52:42.085280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.887 qpair failed and we were unable to recover it. 00:27:07.887 [2024-07-15 20:52:42.085529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.888 [2024-07-15 20:52:42.085541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.888 qpair failed and we were unable to recover it. 00:27:07.888 [2024-07-15 20:52:42.085681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.888 [2024-07-15 20:52:42.085694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.888 qpair failed and we were unable to recover it. 00:27:07.888 [2024-07-15 20:52:42.085876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.888 [2024-07-15 20:52:42.085888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.888 qpair failed and we were unable to recover it. 00:27:07.888 [2024-07-15 20:52:42.086052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.888 [2024-07-15 20:52:42.086064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.888 qpair failed and we were unable to recover it. 00:27:07.888 [2024-07-15 20:52:42.086250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.888 [2024-07-15 20:52:42.086264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.888 qpair failed and we were unable to recover it. 00:27:07.888 [2024-07-15 20:52:42.086502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.888 [2024-07-15 20:52:42.086514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.888 qpair failed and we were unable to recover it. 
00:27:07.888 [2024-07-15 20:52:42.086646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.888 [2024-07-15 20:52:42.086658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.888 qpair failed and we were unable to recover it.
00:27:07.888 [... this connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triple repeats some two hundred more times between 20:52:42.086 and 20:52:42.120, mostly for tqpair=0x7f4840000b90 (briefly 0x7f4848000b90), always against addr=10.0.0.2, port=4420; only the first and last occurrences and the non-repeating lines interleaved in this span are kept below ...]
00:27:07.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2846199 Killed "${NVMF_APP[@]}" "$@"
00:27:07.889 20:52:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:27:07.889 20:52:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:27:07.889 20:52:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:27:07.889 20:52:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:27:07.889 20:52:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:07.890 20:52:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2847038
00:27:07.890 20:52:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2847038
00:27:07.891 20:52:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:07.891 20:52:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2847038 ']'
00:27:07.891 20:52:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:07.891 20:52:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:27:07.891 20:52:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:07.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:07.891 20:52:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:27:07.891 20:52:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:07.893 [2024-07-15 20:52:42.120111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.893 [2024-07-15 20:52:42.120123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.893 qpair failed and we were unable to recover it.
00:27:07.893 [2024-07-15 20:52:42.120245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.893 [2024-07-15 20:52:42.120263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.893 qpair failed and we were unable to recover it. 00:27:07.893 [2024-07-15 20:52:42.120525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.893 [2024-07-15 20:52:42.120537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.893 qpair failed and we were unable to recover it. 00:27:07.893 [2024-07-15 20:52:42.120730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.893 [2024-07-15 20:52:42.120741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.893 qpair failed and we were unable to recover it. 00:27:07.893 [2024-07-15 20:52:42.120921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.893 [2024-07-15 20:52:42.120933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.893 qpair failed and we were unable to recover it. 00:27:07.893 [2024-07-15 20:52:42.121175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.893 [2024-07-15 20:52:42.121186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.893 qpair failed and we were unable to recover it. 00:27:07.893 [2024-07-15 20:52:42.121443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.893 [2024-07-15 20:52:42.121454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.893 qpair failed and we were unable to recover it. 00:27:07.893 [2024-07-15 20:52:42.121631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.893 [2024-07-15 20:52:42.121644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.893 qpair failed and we were unable to recover it. 00:27:07.893 [2024-07-15 20:52:42.121819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.893 [2024-07-15 20:52:42.121830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.893 qpair failed and we were unable to recover it. 00:27:07.893 [2024-07-15 20:52:42.121951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.893 [2024-07-15 20:52:42.121963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.893 qpair failed and we were unable to recover it. 00:27:07.893 [2024-07-15 20:52:42.122164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.893 [2024-07-15 20:52:42.122175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.893 qpair failed and we were unable to recover it. 
00:27:07.893 [2024-07-15 20:52:42.122296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.893 [2024-07-15 20:52:42.122307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.893 qpair failed and we were unable to recover it. 00:27:07.893 [2024-07-15 20:52:42.122429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.893 [2024-07-15 20:52:42.122441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.893 qpair failed and we were unable to recover it. 00:27:07.893 [2024-07-15 20:52:42.122565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.893 [2024-07-15 20:52:42.122577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.893 qpair failed and we were unable to recover it. 00:27:07.893 [2024-07-15 20:52:42.122692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.893 [2024-07-15 20:52:42.122703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.893 qpair failed and we were unable to recover it. 00:27:07.893 [2024-07-15 20:52:42.122973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.893 [2024-07-15 20:52:42.122985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.893 qpair failed and we were unable to recover it. 00:27:07.893 [2024-07-15 20:52:42.123105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.893 [2024-07-15 20:52:42.123117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.893 qpair failed and we were unable to recover it. 00:27:07.893 [2024-07-15 20:52:42.123246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.893 [2024-07-15 20:52:42.123258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.893 qpair failed and we were unable to recover it. 00:27:07.893 [2024-07-15 20:52:42.123429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.893 [2024-07-15 20:52:42.123440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.893 qpair failed and we were unable to recover it. 00:27:07.893 [2024-07-15 20:52:42.123636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.893 [2024-07-15 20:52:42.123647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.893 qpair failed and we were unable to recover it. 00:27:07.893 [2024-07-15 20:52:42.123916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.893 [2024-07-15 20:52:42.123928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.893 qpair failed and we were unable to recover it. 
00:27:07.894 [2024-07-15 20:52:42.124178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.124190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 00:27:07.894 [2024-07-15 20:52:42.124318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.124329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 00:27:07.894 [2024-07-15 20:52:42.124475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.124487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 00:27:07.894 [2024-07-15 20:52:42.124670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.124682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 00:27:07.894 [2024-07-15 20:52:42.124919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.124929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 00:27:07.894 [2024-07-15 20:52:42.125055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.125066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 00:27:07.894 [2024-07-15 20:52:42.125187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.125198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 00:27:07.894 [2024-07-15 20:52:42.125447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.125458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 00:27:07.894 [2024-07-15 20:52:42.125593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.125605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 00:27:07.894 [2024-07-15 20:52:42.125727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.125738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 
00:27:07.894 [2024-07-15 20:52:42.125975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.125986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 00:27:07.894 [2024-07-15 20:52:42.126092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.126104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 00:27:07.894 [2024-07-15 20:52:42.126288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.126300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 00:27:07.894 [2024-07-15 20:52:42.126424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.126436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 00:27:07.894 [2024-07-15 20:52:42.126570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.126582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 00:27:07.894 [2024-07-15 20:52:42.126754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.126767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 00:27:07.894 [2024-07-15 20:52:42.127006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.127018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 00:27:07.894 [2024-07-15 20:52:42.127250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.127262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 00:27:07.894 [2024-07-15 20:52:42.127378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.127390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 00:27:07.894 [2024-07-15 20:52:42.127533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.127544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 
00:27:07.894 [2024-07-15 20:52:42.127776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.127787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 00:27:07.894 [2024-07-15 20:52:42.127971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.127982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 00:27:07.894 [2024-07-15 20:52:42.128186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.128197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 00:27:07.894 [2024-07-15 20:52:42.128443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.128455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 00:27:07.894 [2024-07-15 20:52:42.128640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.128652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 00:27:07.894 [2024-07-15 20:52:42.128770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.128781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 00:27:07.894 [2024-07-15 20:52:42.128961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.128973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 00:27:07.894 [2024-07-15 20:52:42.129106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.129119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 00:27:07.894 [2024-07-15 20:52:42.129413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.129424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 00:27:07.894 [2024-07-15 20:52:42.129611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.129622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 
00:27:07.894 [2024-07-15 20:52:42.129739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.129751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 00:27:07.894 [2024-07-15 20:52:42.130055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.130066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 00:27:07.894 [2024-07-15 20:52:42.130171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.130184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 00:27:07.894 [2024-07-15 20:52:42.130364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.130376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 00:27:07.894 [2024-07-15 20:52:42.130556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.130568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 00:27:07.894 [2024-07-15 20:52:42.130740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.130752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 00:27:07.894 [2024-07-15 20:52:42.130951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.130962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 00:27:07.894 [2024-07-15 20:52:42.131069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.131080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 00:27:07.894 [2024-07-15 20:52:42.131272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.131284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 00:27:07.894 [2024-07-15 20:52:42.131464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.131476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 
00:27:07.894 [2024-07-15 20:52:42.131607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.894 [2024-07-15 20:52:42.131619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.894 qpair failed and we were unable to recover it. 00:27:07.895 [2024-07-15 20:52:42.131860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.131872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 00:27:07.895 [2024-07-15 20:52:42.131977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.131988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 00:27:07.895 [2024-07-15 20:52:42.132126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.132138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 00:27:07.895 [2024-07-15 20:52:42.132366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.132378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 00:27:07.895 [2024-07-15 20:52:42.132569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.132580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 00:27:07.895 [2024-07-15 20:52:42.132790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.132802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 00:27:07.895 [2024-07-15 20:52:42.132997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.133009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 00:27:07.895 [2024-07-15 20:52:42.133195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.133207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 00:27:07.895 [2024-07-15 20:52:42.133467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.133480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 
00:27:07.895 [2024-07-15 20:52:42.133603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.133615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 00:27:07.895 [2024-07-15 20:52:42.133732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.133744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 00:27:07.895 [2024-07-15 20:52:42.133911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.133923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 00:27:07.895 [2024-07-15 20:52:42.134034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.134047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 00:27:07.895 [2024-07-15 20:52:42.134170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.134182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 00:27:07.895 [2024-07-15 20:52:42.134446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.134458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 00:27:07.895 [2024-07-15 20:52:42.134710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.134721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 00:27:07.895 [2024-07-15 20:52:42.134927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.134939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 00:27:07.895 [2024-07-15 20:52:42.135166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.135178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 00:27:07.895 [2024-07-15 20:52:42.135360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.135373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 
00:27:07.895 [2024-07-15 20:52:42.135564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.135576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 00:27:07.895 [2024-07-15 20:52:42.135783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.135795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 00:27:07.895 [2024-07-15 20:52:42.135921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.135932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 00:27:07.895 [2024-07-15 20:52:42.136092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.136104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 00:27:07.895 [2024-07-15 20:52:42.136214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.136231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 00:27:07.895 [2024-07-15 20:52:42.136464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.136476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 00:27:07.895 [2024-07-15 20:52:42.136666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.136680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 00:27:07.895 [2024-07-15 20:52:42.136952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.136964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 00:27:07.895 [2024-07-15 20:52:42.137176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.137187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 00:27:07.895 [2024-07-15 20:52:42.137306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.137318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 
00:27:07.895 [2024-07-15 20:52:42.137574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.137587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 00:27:07.895 [2024-07-15 20:52:42.137773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.137785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 00:27:07.895 [2024-07-15 20:52:42.138056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.138067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 00:27:07.895 [2024-07-15 20:52:42.138254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.138266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 00:27:07.895 [2024-07-15 20:52:42.138458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.138470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 00:27:07.895 [2024-07-15 20:52:42.138616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.138628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 00:27:07.895 [2024-07-15 20:52:42.138815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.138827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 00:27:07.895 [2024-07-15 20:52:42.139096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.139108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 00:27:07.895 [2024-07-15 20:52:42.139235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.139248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 00:27:07.895 [2024-07-15 20:52:42.139485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.139497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 
00:27:07.895 [2024-07-15 20:52:42.139629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.139641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 00:27:07.895 [2024-07-15 20:52:42.139900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.895 [2024-07-15 20:52:42.139911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.895 qpair failed and we were unable to recover it. 00:27:07.896 [2024-07-15 20:52:42.140090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.896 [2024-07-15 20:52:42.140101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.896 qpair failed and we were unable to recover it. 00:27:07.896 [2024-07-15 20:52:42.140375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.896 [2024-07-15 20:52:42.140386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.896 qpair failed and we were unable to recover it. 00:27:07.896 [2024-07-15 20:52:42.140519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.896 [2024-07-15 20:52:42.140531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.896 qpair failed and we were unable to recover it. 00:27:07.896 [2024-07-15 20:52:42.140670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.896 [2024-07-15 20:52:42.140681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.896 qpair failed and we were unable to recover it. 00:27:07.896 [2024-07-15 20:52:42.140800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.896 [2024-07-15 20:52:42.140812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.896 qpair failed and we were unable to recover it. 00:27:07.896 [2024-07-15 20:52:42.141053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.896 [2024-07-15 20:52:42.141065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.896 qpair failed and we were unable to recover it. 00:27:07.896 [2024-07-15 20:52:42.141239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.896 [2024-07-15 20:52:42.141251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.896 qpair failed and we were unable to recover it. 00:27:07.896 [2024-07-15 20:52:42.141383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.896 [2024-07-15 20:52:42.141395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.896 qpair failed and we were unable to recover it. 
00:27:07.896 [2024-07-15 20:52:42.141572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.896 [2024-07-15 20:52:42.141585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.896 qpair failed and we were unable to recover it. 00:27:07.896 [2024-07-15 20:52:42.141716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.896 [2024-07-15 20:52:42.141728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.896 qpair failed and we were unable to recover it. 00:27:07.896 [2024-07-15 20:52:42.141966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.896 [2024-07-15 20:52:42.141978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.896 qpair failed and we were unable to recover it. 00:27:07.896 [2024-07-15 20:52:42.142153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.896 [2024-07-15 20:52:42.142165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.896 qpair failed and we were unable to recover it. 00:27:07.896 [2024-07-15 20:52:42.142293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.896 [2024-07-15 20:52:42.142304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.896 qpair failed and we were unable to recover it. 00:27:07.896 [2024-07-15 20:52:42.142473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.896 [2024-07-15 20:52:42.142484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.896 qpair failed and we were unable to recover it. 00:27:07.896 [2024-07-15 20:52:42.142672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.896 [2024-07-15 20:52:42.142685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.896 qpair failed and we were unable to recover it. 00:27:07.896 [2024-07-15 20:52:42.142891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.896 [2024-07-15 20:52:42.142904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.896 qpair failed and we were unable to recover it. 00:27:07.896 [2024-07-15 20:52:42.143135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.896 [2024-07-15 20:52:42.143148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.896 qpair failed and we were unable to recover it. 00:27:07.896 [2024-07-15 20:52:42.143295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.896 [2024-07-15 20:52:42.143307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.896 qpair failed and we were unable to recover it. 
00:27:07.896 [2024-07-15 20:52:42.143560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.896 [2024-07-15 20:52:42.143572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.896 qpair failed and we were unable to recover it. 00:27:07.896 [2024-07-15 20:52:42.143718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.896 [2024-07-15 20:52:42.143730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.896 qpair failed and we were unable to recover it. 00:27:07.896 [2024-07-15 20:52:42.143868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.896 [2024-07-15 20:52:42.143880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.896 qpair failed and we were unable to recover it. 00:27:07.896 [2024-07-15 20:52:42.144121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.896 [2024-07-15 20:52:42.144133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.896 qpair failed and we were unable to recover it. 00:27:07.896 [2024-07-15 20:52:42.144360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.896 [2024-07-15 20:52:42.144372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.896 qpair failed and we were unable to recover it. 00:27:07.896 [2024-07-15 20:52:42.144498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.896 [2024-07-15 20:52:42.144509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.896 qpair failed and we were unable to recover it. 00:27:07.896 [2024-07-15 20:52:42.144636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.896 [2024-07-15 20:52:42.144651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.896 qpair failed and we were unable to recover it. 00:27:07.896 [2024-07-15 20:52:42.144840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.896 [2024-07-15 20:52:42.144852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.896 qpair failed and we were unable to recover it. 00:27:07.896 [2024-07-15 20:52:42.145037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.896 [2024-07-15 20:52:42.145049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.896 qpair failed and we were unable to recover it. 00:27:07.896 [2024-07-15 20:52:42.145219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.896 [2024-07-15 20:52:42.145236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.896 qpair failed and we were unable to recover it. 
00:27:07.896 [2024-07-15 20:52:42.145427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.896 [2024-07-15 20:52:42.145440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.896 qpair failed and we were unable to recover it.
00:27:07.897 [... the same three-line connect() failure (errno = 111, ECONNREFUSED; tqpair=0x7f4840000b90, addr=10.0.0.2, port=4420) repeats from 2024-07-15 20:52:42.145546 through 20:52:42.153047 ...]
00:27:07.897 [... failure repeats from 2024-07-15 20:52:42.153285 through 20:52:42.154498 ...]
00:27:07.897 [2024-07-15 20:52:42.154585] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization...
00:27:07.897 [2024-07-15 20:52:42.154617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.898 [2024-07-15 20:52:42.154626] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:07.898 [2024-07-15 20:52:42.154629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.898 qpair failed and we were unable to recover it.
00:27:07.898 [2024-07-15 20:52:42.154772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.898 [2024-07-15 20:52:42.154783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.898 qpair failed and we were unable to recover it.
00:27:07.898 [... failure repeats from 2024-07-15 20:52:42.154924 through 20:52:42.181143 ...]
00:27:07.901 [2024-07-15 20:52:42.181285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.901 [2024-07-15 20:52:42.181297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.901 qpair failed and we were unable to recover it. 00:27:07.901 [2024-07-15 20:52:42.181413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.901 [2024-07-15 20:52:42.181424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.901 qpair failed and we were unable to recover it. 00:27:07.901 [2024-07-15 20:52:42.181546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.901 [2024-07-15 20:52:42.181557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.901 qpair failed and we were unable to recover it. 00:27:07.901 EAL: No free 2048 kB hugepages reported on node 1 00:27:07.901 [2024-07-15 20:52:42.181746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.901 [2024-07-15 20:52:42.181758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.901 qpair failed and we were unable to recover it. 00:27:07.901 [2024-07-15 20:52:42.181878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.901 [2024-07-15 20:52:42.181890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.901 qpair failed and we were unable to recover it. 00:27:07.901 [2024-07-15 20:52:42.182008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.901 [2024-07-15 20:52:42.182020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.901 qpair failed and we were unable to recover it. 00:27:07.901 [2024-07-15 20:52:42.182317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.901 [2024-07-15 20:52:42.182333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.901 qpair failed and we were unable to recover it. 00:27:07.901 [2024-07-15 20:52:42.182514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.901 [2024-07-15 20:52:42.182527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.901 qpair failed and we were unable to recover it. 00:27:07.901 [2024-07-15 20:52:42.182715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.901 [2024-07-15 20:52:42.182727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.901 qpair failed and we were unable to recover it. 00:27:07.901 [2024-07-15 20:52:42.182918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.901 [2024-07-15 20:52:42.182930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.901 qpair failed and we were unable to recover it. 
00:27:07.901 [2024-07-15 20:52:42.183105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.901 [2024-07-15 20:52:42.183116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.901 qpair failed and we were unable to recover it. 00:27:07.901 [2024-07-15 20:52:42.183360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.901 [2024-07-15 20:52:42.183371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.901 qpair failed and we were unable to recover it. 00:27:07.901 [2024-07-15 20:52:42.183576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.901 [2024-07-15 20:52:42.183588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.901 qpair failed and we were unable to recover it. 00:27:07.901 [2024-07-15 20:52:42.183780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.901 [2024-07-15 20:52:42.183792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.901 qpair failed and we were unable to recover it. 00:27:07.901 [2024-07-15 20:52:42.183902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.901 [2024-07-15 20:52:42.183913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-07-15 20:52:42.184171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-07-15 20:52:42.184183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-07-15 20:52:42.184352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-07-15 20:52:42.184364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-07-15 20:52:42.184534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-07-15 20:52:42.184546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-07-15 20:52:42.184668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-07-15 20:52:42.184679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-07-15 20:52:42.184871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-07-15 20:52:42.184882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 
00:27:07.902 [2024-07-15 20:52:42.185057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-07-15 20:52:42.185069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-07-15 20:52:42.185266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-07-15 20:52:42.185278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-07-15 20:52:42.185407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-07-15 20:52:42.185419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-07-15 20:52:42.185546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-07-15 20:52:42.185558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-07-15 20:52:42.185737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-07-15 20:52:42.185749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-07-15 20:52:42.185861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-07-15 20:52:42.185872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-07-15 20:52:42.186149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-07-15 20:52:42.186160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-07-15 20:52:42.186342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-07-15 20:52:42.186354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-07-15 20:52:42.186479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-07-15 20:52:42.186491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-07-15 20:52:42.186704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-07-15 20:52:42.186716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 
00:27:07.902 [2024-07-15 20:52:42.186914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-07-15 20:52:42.186925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-07-15 20:52:42.187185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-07-15 20:52:42.187197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-07-15 20:52:42.187384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-07-15 20:52:42.187396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-07-15 20:52:42.187529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-07-15 20:52:42.187540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-07-15 20:52:42.187662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-07-15 20:52:42.187674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-07-15 20:52:42.187928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-07-15 20:52:42.187939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-07-15 20:52:42.188117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-07-15 20:52:42.188128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-07-15 20:52:42.188259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-07-15 20:52:42.188271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-07-15 20:52:42.188505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-07-15 20:52:42.188517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-07-15 20:52:42.188707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-07-15 20:52:42.188719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 
00:27:07.902 [2024-07-15 20:52:42.188880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-07-15 20:52:42.188891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-07-15 20:52:42.189145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-07-15 20:52:42.189157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-07-15 20:52:42.189416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-07-15 20:52:42.189428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-07-15 20:52:42.189543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-07-15 20:52:42.189555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-07-15 20:52:42.189745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-07-15 20:52:42.189757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-07-15 20:52:42.189860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-07-15 20:52:42.189871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-07-15 20:52:42.189986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-07-15 20:52:42.189999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-07-15 20:52:42.190127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-07-15 20:52:42.190138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-07-15 20:52:42.190265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-07-15 20:52:42.190276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-07-15 20:52:42.190465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-07-15 20:52:42.190478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 
00:27:07.902 [2024-07-15 20:52:42.190653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-07-15 20:52:42.190665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.903 [2024-07-15 20:52:42.190850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.190862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-07-15 20:52:42.191142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.191153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-07-15 20:52:42.191344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.191356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-07-15 20:52:42.191481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.191493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-07-15 20:52:42.191666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.191677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-07-15 20:52:42.191868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.191879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-07-15 20:52:42.192139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.192150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-07-15 20:52:42.192321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.192333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-07-15 20:52:42.192477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.192488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 
00:27:07.903 [2024-07-15 20:52:42.192604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.192615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-07-15 20:52:42.192745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.192756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-07-15 20:52:42.192951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.192962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-07-15 20:52:42.193174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.193185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-07-15 20:52:42.193430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.193442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-07-15 20:52:42.193561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.193572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-07-15 20:52:42.193698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.193709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-07-15 20:52:42.193979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.193990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-07-15 20:52:42.194087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.194098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-07-15 20:52:42.194338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.194350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 
00:27:07.903 [2024-07-15 20:52:42.194526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.194537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-07-15 20:52:42.194736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.194748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-07-15 20:52:42.194990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.195001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-07-15 20:52:42.195132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.195143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-07-15 20:52:42.195261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.195272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-07-15 20:52:42.195453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.195466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-07-15 20:52:42.195659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.195671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-07-15 20:52:42.195863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.195874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-07-15 20:52:42.196117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.196129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-07-15 20:52:42.196358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.196369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 
00:27:07.903 [2024-07-15 20:52:42.196566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.196577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-07-15 20:52:42.196702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.196713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-07-15 20:52:42.196894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.196905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-07-15 20:52:42.197160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.197171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-07-15 20:52:42.197360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.197372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-07-15 20:52:42.197590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.197601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-07-15 20:52:42.197733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.197746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-07-15 20:52:42.197979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.197990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-07-15 20:52:42.198127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.198139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-07-15 20:52:42.198287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.198299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 
00:27:07.903 [2024-07-15 20:52:42.198478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-07-15 20:52:42.198489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.904 [2024-07-15 20:52:42.198678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.198689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-07-15 20:52:42.198807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.198819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-07-15 20:52:42.199065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.199077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-07-15 20:52:42.199183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.199194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-07-15 20:52:42.199464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.199476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-07-15 20:52:42.199654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.199666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-07-15 20:52:42.199881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.199892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-07-15 20:52:42.200155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.200167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-07-15 20:52:42.200283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.200296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 
00:27:07.904 [2024-07-15 20:52:42.200437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.200449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-07-15 20:52:42.200589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.200601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-07-15 20:52:42.200740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.200752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-07-15 20:52:42.200867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.200878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-07-15 20:52:42.201050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.201062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-07-15 20:52:42.201239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.201252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-07-15 20:52:42.201376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.201388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-07-15 20:52:42.201563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.201574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-07-15 20:52:42.201683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.201694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-07-15 20:52:42.201828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.201840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 
00:27:07.904 [2024-07-15 20:52:42.202012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.202023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-07-15 20:52:42.202303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.202315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-07-15 20:52:42.202497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.202509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-07-15 20:52:42.202707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.202720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-07-15 20:52:42.202959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.202971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-07-15 20:52:42.203099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.203110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-07-15 20:52:42.203456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.203468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-07-15 20:52:42.203657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.203668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-07-15 20:52:42.203826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.203838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-07-15 20:52:42.203963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.203975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 
00:27:07.904 [2024-07-15 20:52:42.204154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.204165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-07-15 20:52:42.204409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.204420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-07-15 20:52:42.204573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.204584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-07-15 20:52:42.204721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.204732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-07-15 20:52:42.204955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.204967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-07-15 20:52:42.205209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.205221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-07-15 20:52:42.205440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.205453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-07-15 20:52:42.205647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.205660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-07-15 20:52:42.205847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.205858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-07-15 20:52:42.206057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.206068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 
00:27:07.904 [2024-07-15 20:52:42.206253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-07-15 20:52:42.206265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.905 [2024-07-15 20:52:42.206533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-07-15 20:52:42.206545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-07-15 20:52:42.206722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-07-15 20:52:42.206733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-07-15 20:52:42.206980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-07-15 20:52:42.206991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-07-15 20:52:42.207116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-07-15 20:52:42.207127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-07-15 20:52:42.207362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-07-15 20:52:42.207374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-07-15 20:52:42.207496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-07-15 20:52:42.207507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-07-15 20:52:42.207714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-07-15 20:52:42.207725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-07-15 20:52:42.207901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-07-15 20:52:42.207913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-07-15 20:52:42.208116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-07-15 20:52:42.208127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 
00:27:07.905 [2024-07-15 20:52:42.208266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-07-15 20:52:42.208278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-07-15 20:52:42.208468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-07-15 20:52:42.208480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-07-15 20:52:42.208622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-07-15 20:52:42.208633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-07-15 20:52:42.208763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-07-15 20:52:42.208775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-07-15 20:52:42.208974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-07-15 20:52:42.208986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-07-15 20:52:42.209096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-07-15 20:52:42.209106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-07-15 20:52:42.209298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-07-15 20:52:42.209309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-07-15 20:52:42.209434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-07-15 20:52:42.209446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-07-15 20:52:42.209631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-07-15 20:52:42.209643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-07-15 20:52:42.209757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-07-15 20:52:42.209768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 
00:27:07.905 [2024-07-15 20:52:42.210001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-07-15 20:52:42.210012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-07-15 20:52:42.210258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-07-15 20:52:42.210270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-07-15 20:52:42.210533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-07-15 20:52:42.210544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-07-15 20:52:42.210656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-07-15 20:52:42.210667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-07-15 20:52:42.210773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-07-15 20:52:42.210783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-07-15 20:52:42.210906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-07-15 20:52:42.210917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-07-15 20:52:42.211088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-07-15 20:52:42.211100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-07-15 20:52:42.211220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-07-15 20:52:42.211237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-07-15 20:52:42.211479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-07-15 20:52:42.211490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-07-15 20:52:42.211600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-07-15 20:52:42.211612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 
00:27:07.905 [2024-07-15 20:52:42.211855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-07-15 20:52:42.211867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-07-15 20:52:42.212104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-07-15 20:52:42.212115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-07-15 20:52:42.212409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-07-15 20:52:42.212421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-07-15 20:52:42.212602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-07-15 20:52:42.212613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-07-15 20:52:42.212795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-07-15 20:52:42.212807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-07-15 20:52:42.212983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-07-15 20:52:42.212994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-07-15 20:52:42.213260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-07-15 20:52:42.213280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-07-15 20:52:42.213425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-07-15 20:52:42.213436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-07-15 20:52:42.213616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-07-15 20:52:42.213628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-07-15 20:52:42.213866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-07-15 20:52:42.213877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 
00:27:07.905 [2024-07-15 20:52:42.214111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.905 [2024-07-15 20:52:42.214123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.905 qpair failed and we were unable to recover it.
00:27:07.906 [2024-07-15 20:52:42.214307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.906 [2024-07-15 20:52:42.214319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.906 qpair failed and we were unable to recover it.
00:27:07.906 [2024-07-15 20:52:42.214508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.906 [2024-07-15 20:52:42.214519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.906 qpair failed and we were unable to recover it.
00:27:07.906 [2024-07-15 20:52:42.214650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.906 [2024-07-15 20:52:42.214663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.906 qpair failed and we were unable to recover it.
00:27:07.906 [2024-07-15 20:52:42.214837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.906 [2024-07-15 20:52:42.214849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.906 qpair failed and we were unable to recover it.
00:27:07.906 [2024-07-15 20:52:42.215038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.906 [2024-07-15 20:52:42.215050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.906 qpair failed and we were unable to recover it.
00:27:07.906 [2024-07-15 20:52:42.215223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.906 [2024-07-15 20:52:42.215240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.906 qpair failed and we were unable to recover it.
00:27:07.906 [2024-07-15 20:52:42.215394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.906 [2024-07-15 20:52:42.215406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.906 qpair failed and we were unable to recover it.
00:27:07.906 [2024-07-15 20:52:42.215577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.906 [2024-07-15 20:52:42.215588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.906 qpair failed and we were unable to recover it.
00:27:07.906 [2024-07-15 20:52:42.215759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.906 [2024-07-15 20:52:42.215771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.906 qpair failed and we were unable to recover it.
00:27:07.906 [2024-07-15 20:52:42.216001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.906 [2024-07-15 20:52:42.216013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.906 qpair failed and we were unable to recover it.
00:27:07.906 [2024-07-15 20:52:42.216232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.906 [2024-07-15 20:52:42.216244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.906 qpair failed and we were unable to recover it.
00:27:07.906 [2024-07-15 20:52:42.216365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.906 [2024-07-15 20:52:42.216376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.906 qpair failed and we were unable to recover it.
00:27:07.906 [2024-07-15 20:52:42.216622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.906 [2024-07-15 20:52:42.216634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.906 qpair failed and we were unable to recover it.
00:27:07.906 [2024-07-15 20:52:42.216820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.906 [2024-07-15 20:52:42.216832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.906 qpair failed and we were unable to recover it.
00:27:07.906 [2024-07-15 20:52:42.217040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.906 [2024-07-15 20:52:42.217052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.906 qpair failed and we were unable to recover it.
00:27:07.906 [2024-07-15 20:52:42.217179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.906 [2024-07-15 20:52:42.217190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.906 qpair failed and we were unable to recover it.
00:27:07.906 [2024-07-15 20:52:42.217300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.906 [2024-07-15 20:52:42.217312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.906 qpair failed and we were unable to recover it.
00:27:07.906 [2024-07-15 20:52:42.217486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.906 [2024-07-15 20:52:42.217498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.906 qpair failed and we were unable to recover it.
00:27:07.906 [2024-07-15 20:52:42.217735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.906 [2024-07-15 20:52:42.217747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.906 qpair failed and we were unable to recover it.
00:27:07.906 [2024-07-15 20:52:42.217877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.906 [2024-07-15 20:52:42.217888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.906 qpair failed and we were unable to recover it.
00:27:07.906 [2024-07-15 20:52:42.218069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.906 [2024-07-15 20:52:42.218081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.906 qpair failed and we were unable to recover it.
00:27:07.906 [2024-07-15 20:52:42.218278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.906 [2024-07-15 20:52:42.218290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.906 qpair failed and we were unable to recover it.
00:27:07.906 [2024-07-15 20:52:42.218434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.906 [2024-07-15 20:52:42.218459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:07.906 qpair failed and we were unable to recover it.
00:27:07.906 [2024-07-15 20:52:42.218688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.906 [2024-07-15 20:52:42.218712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.906 qpair failed and we were unable to recover it.
00:27:07.906 [2024-07-15 20:52:42.218912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.906 [2024-07-15 20:52:42.218928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.906 qpair failed and we were unable to recover it.
00:27:07.906 [2024-07-15 20:52:42.219190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.906 [2024-07-15 20:52:42.219205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.906 qpair failed and we were unable to recover it.
00:27:07.906 [2024-07-15 20:52:42.219361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.906 [2024-07-15 20:52:42.219376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.906 qpair failed and we were unable to recover it.
00:27:07.906 [2024-07-15 20:52:42.219569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.906 [2024-07-15 20:52:42.219585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.906 qpair failed and we were unable to recover it.
00:27:07.906 [2024-07-15 20:52:42.219710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.906 [2024-07-15 20:52:42.219725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.906 qpair failed and we were unable to recover it.
00:27:07.906 [2024-07-15 20:52:42.219864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.906 [2024-07-15 20:52:42.219879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.906 qpair failed and we were unable to recover it.
00:27:07.906 [2024-07-15 20:52:42.220072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.906 [2024-07-15 20:52:42.220088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.906 qpair failed and we were unable to recover it.
00:27:07.906 [2024-07-15 20:52:42.220307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.906 [2024-07-15 20:52:42.220323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.906 qpair failed and we were unable to recover it.
00:27:07.906 [2024-07-15 20:52:42.220546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.906 [2024-07-15 20:52:42.220561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.906 qpair failed and we were unable to recover it.
00:27:07.906 [2024-07-15 20:52:42.220758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.906 [2024-07-15 20:52:42.220773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.906 qpair failed and we were unable to recover it.
00:27:07.906 [2024-07-15 20:52:42.221018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.221034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.221237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.221256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.221446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.221462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.221641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.221656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.221970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.221986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.222112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.222127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.222390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.222406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.222628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.222643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.222833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.222848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.223026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.223041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.223306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.223322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.223453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.223468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.223694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.223709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.223904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.223919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.224120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.224135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.224403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.224419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.224435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:27:07.907 [2024-07-15 20:52:42.224617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.224632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.224769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.224784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.225030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.225045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.225258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.225274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.225490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.225506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.225653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.225669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.225853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.225869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.226067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.226082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.226280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.226296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.226497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.226512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.226713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.226729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.227050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.227065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.227192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.227206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.227415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.227432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.227620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.227635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.227828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.227843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.228039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.228054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.228179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.228196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.228401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.228418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.228616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.228632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.228829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.228845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.229100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.229116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.229334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.229350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.229546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.907 [2024-07-15 20:52:42.229561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.907 qpair failed and we were unable to recover it.
00:27:07.907 [2024-07-15 20:52:42.229694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.229710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.229929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.229963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.230288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.230302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.230487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.230499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.230637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.230649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.230838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.230849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.231042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.231054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.231207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.231219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.231412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.231424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.231607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.231620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.231803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.231815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.231994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.232006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.232178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.232189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.232476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.232488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.232674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.232691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.232872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.232884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.233053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.233065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.233192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.233203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.233422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.233434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.233566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.233578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.233712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.233724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.233979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.233992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.234169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.234181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.234419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.234431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.234618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.234630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.234827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.234839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.235019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.235030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.235321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.235333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.235524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.235535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.235710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.235723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.235924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.235936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.236112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.236124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.236292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.236305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.236503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.236515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.236702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.236713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.236925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.236937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.237219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.237236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.237361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.237374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.237570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.237583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.237718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.908 [2024-07-15 20:52:42.237730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.908 qpair failed and we were unable to recover it.
00:27:07.908 [2024-07-15 20:52:42.237834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.237844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.238114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.238135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.238362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.238378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.238517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.238533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.238719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.238734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.239069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.239084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.239283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.239300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.239492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.239508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.239700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.239715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.240018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.240034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.240220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.240242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.240382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.240396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.240639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.240655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.240784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.240799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.241044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.241059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.241184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.241199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.241334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.241351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.241547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.241562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.241767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.241782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.242033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.242048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.242249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.242265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.242444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.242460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.242657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.242673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.242933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.242948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.243203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.243217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.243408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.243423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.243549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.243564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.243810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.243825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.244096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.244115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.244437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.244453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.244590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.244605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.244758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.244773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.245035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.245050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.245302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.245319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.245544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.245559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.245750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.245767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.245998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.246012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.246146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.246161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.246365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.246380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.246564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.246581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.909 [2024-07-15 20:52:42.246826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.909 [2024-07-15 20:52:42.246841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.909 qpair failed and we were unable to recover it.
00:27:07.910 [2024-07-15 20:52:42.247086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.910 [2024-07-15 20:52:42.247101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.910 qpair failed and we were unable to recover it.
00:27:07.910 [2024-07-15 20:52:42.247295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.910 [2024-07-15 20:52:42.247310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.910 qpair failed and we were unable to recover it.
00:27:07.910 [2024-07-15 20:52:42.247443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.910 [2024-07-15 20:52:42.247458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.910 qpair failed and we were unable to recover it.
00:27:07.910 [2024-07-15 20:52:42.247675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.910 [2024-07-15 20:52:42.247690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.910 qpair failed and we were unable to recover it.
00:27:07.910 [2024-07-15 20:52:42.247901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.910 [2024-07-15 20:52:42.247916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.910 qpair failed and we were unable to recover it.
00:27:07.910 [2024-07-15 20:52:42.248048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.910 [2024-07-15 20:52:42.248063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.910 qpair failed and we were unable to recover it.
00:27:07.910 [2024-07-15 20:52:42.248261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.910 [2024-07-15 20:52:42.248276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.910 qpair failed and we were unable to recover it.
00:27:07.910 [2024-07-15 20:52:42.248471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.910 [2024-07-15 20:52:42.248487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.910 qpair failed and we were unable to recover it.
00:27:07.910 [2024-07-15 20:52:42.248685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.910 [2024-07-15 20:52:42.248699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.910 qpair failed and we were unable to recover it.
00:27:07.910 [2024-07-15 20:52:42.248899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.910 [2024-07-15 20:52:42.248914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.910 qpair failed and we were unable to recover it.
00:27:07.910 [2024-07-15 20:52:42.249040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.910 [2024-07-15 20:52:42.249054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.910 qpair failed and we were unable to recover it.
00:27:07.910 [2024-07-15 20:52:42.249288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.910 [2024-07-15 20:52:42.249304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.910 qpair failed and we were unable to recover it.
00:27:07.910 [2024-07-15 20:52:42.249441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.910 [2024-07-15 20:52:42.249457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.910 qpair failed and we were unable to recover it.
00:27:07.910 [2024-07-15 20:52:42.249608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-07-15 20:52:42.249623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-07-15 20:52:42.249768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-07-15 20:52:42.249786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-07-15 20:52:42.250034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-07-15 20:52:42.250050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-07-15 20:52:42.250256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-07-15 20:52:42.250271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-07-15 20:52:42.250515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-07-15 20:52:42.250530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-07-15 20:52:42.250658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-07-15 20:52:42.250674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-07-15 20:52:42.250868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-07-15 20:52:42.250883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-07-15 20:52:42.251080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-07-15 20:52:42.251095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-07-15 20:52:42.251232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-07-15 20:52:42.251248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-07-15 20:52:42.251367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-07-15 20:52:42.251382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 
00:27:07.910 [2024-07-15 20:52:42.251626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-07-15 20:52:42.251642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-07-15 20:52:42.251964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-07-15 20:52:42.251979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-07-15 20:52:42.252165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-07-15 20:52:42.252180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-07-15 20:52:42.252441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-07-15 20:52:42.252457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-07-15 20:52:42.252658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-07-15 20:52:42.252673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-07-15 20:52:42.253008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-07-15 20:52:42.253024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-07-15 20:52:42.253214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-07-15 20:52:42.253234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-07-15 20:52:42.253448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-07-15 20:52:42.253464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-07-15 20:52:42.253663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-07-15 20:52:42.253678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-07-15 20:52:42.253865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-07-15 20:52:42.253880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 
00:27:07.910 [2024-07-15 20:52:42.254073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-07-15 20:52:42.254088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-07-15 20:52:42.254340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-07-15 20:52:42.254355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-07-15 20:52:42.254493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.254508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-07-15 20:52:42.254686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.254701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-07-15 20:52:42.254949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.254964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-07-15 20:52:42.255093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.255108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-07-15 20:52:42.255308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.255324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-07-15 20:52:42.255565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.255580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-07-15 20:52:42.255791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.255806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-07-15 20:52:42.256094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.256109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 
00:27:07.911 [2024-07-15 20:52:42.256334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.256350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-07-15 20:52:42.256499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.256514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-07-15 20:52:42.256788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.256802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-07-15 20:52:42.256949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.256964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-07-15 20:52:42.257103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.257119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-07-15 20:52:42.257387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.257403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-07-15 20:52:42.257544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.257559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-07-15 20:52:42.257758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.257772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-07-15 20:52:42.257968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.257983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-07-15 20:52:42.258196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.258212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 
00:27:07.911 [2024-07-15 20:52:42.258494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.258509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-07-15 20:52:42.258634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.258650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-07-15 20:52:42.258780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.258799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-07-15 20:52:42.258926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.258941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-07-15 20:52:42.259207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.259223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-07-15 20:52:42.259353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.259368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-07-15 20:52:42.259503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.259518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-07-15 20:52:42.259793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.259808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-07-15 20:52:42.260035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.260050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-07-15 20:52:42.260246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.260261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 
00:27:07.911 [2024-07-15 20:52:42.260460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.260475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-07-15 20:52:42.260625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.260640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-07-15 20:52:42.260762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.260778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-07-15 20:52:42.260927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.260942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-07-15 20:52:42.261134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.261149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-07-15 20:52:42.261381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.261399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-07-15 20:52:42.261597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.261612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-07-15 20:52:42.261743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.261758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-07-15 20:52:42.261981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.261996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-07-15 20:52:42.262199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.262215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 
00:27:07.911 [2024-07-15 20:52:42.262452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.262468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-07-15 20:52:42.262611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.262625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-07-15 20:52:42.262754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-07-15 20:52:42.262769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-07-15 20:52:42.263062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.263078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-07-15 20:52:42.263287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.263304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-07-15 20:52:42.263432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.263448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-07-15 20:52:42.263666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.263681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-07-15 20:52:42.263872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.263887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-07-15 20:52:42.264134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.264150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-07-15 20:52:42.264421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.264438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 
00:27:07.912 [2024-07-15 20:52:42.264568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.264583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-07-15 20:52:42.264732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.264748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-07-15 20:52:42.265016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.265032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-07-15 20:52:42.265232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.265249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-07-15 20:52:42.265378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.265395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-07-15 20:52:42.265532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.265549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-07-15 20:52:42.265695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.265711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-07-15 20:52:42.265999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.266016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-07-15 20:52:42.266142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.266159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-07-15 20:52:42.266392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.266409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 
00:27:07.912 [2024-07-15 20:52:42.266552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.266568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-07-15 20:52:42.266744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.266761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-07-15 20:52:42.267053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.267073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-07-15 20:52:42.267288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.267302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-07-15 20:52:42.267490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.267504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-07-15 20:52:42.267687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.267701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-07-15 20:52:42.267936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.267951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-07-15 20:52:42.268082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.268095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-07-15 20:52:42.268356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.268371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-07-15 20:52:42.268512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.268525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 
00:27:07.912 [2024-07-15 20:52:42.268731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.268744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-07-15 20:52:42.268961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.268975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-07-15 20:52:42.269117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.269131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-07-15 20:52:42.269318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.269331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-07-15 20:52:42.269473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.269486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-07-15 20:52:42.269657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.269677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-07-15 20:52:42.269859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.269873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-07-15 20:52:42.270055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.270070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-07-15 20:52:42.270262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.270276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-07-15 20:52:42.270543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.270556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 
00:27:07.912 [2024-07-15 20:52:42.270739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.270752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-07-15 20:52:42.270958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.270971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-07-15 20:52:42.271210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-07-15 20:52:42.271223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.913 [2024-07-15 20:52:42.271477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.271491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-07-15 20:52:42.271747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.271760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-07-15 20:52:42.272006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.272019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-07-15 20:52:42.272206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.272219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-07-15 20:52:42.272355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.272368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-07-15 20:52:42.272488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.272499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-07-15 20:52:42.272808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.272821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 
00:27:07.913 [2024-07-15 20:52:42.272943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.272955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-07-15 20:52:42.273128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.273141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-07-15 20:52:42.273322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.273335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-07-15 20:52:42.273456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.273468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-07-15 20:52:42.273586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.273598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-07-15 20:52:42.273813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.273826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-07-15 20:52:42.273989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.274002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-07-15 20:52:42.274255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.274267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-07-15 20:52:42.274527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.274540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-07-15 20:52:42.274692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.274705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 
00:27:07.913 [2024-07-15 20:52:42.274905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.274916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-07-15 20:52:42.275123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.275136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-07-15 20:52:42.275348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.275376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-07-15 20:52:42.275510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.275533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-07-15 20:52:42.275655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.275670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-07-15 20:52:42.275792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.275807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-07-15 20:52:42.276012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.276028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-07-15 20:52:42.276237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.276253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-07-15 20:52:42.276430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.276446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-07-15 20:52:42.276656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.276671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 
00:27:07.913 [2024-07-15 20:52:42.276898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.276914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-07-15 20:52:42.277217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.277238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-07-15 20:52:42.277427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.277442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-07-15 20:52:42.277582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.277597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-07-15 20:52:42.277748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.277764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-07-15 20:52:42.277939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.277955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-07-15 20:52:42.278170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.278186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-07-15 20:52:42.278436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.278452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-07-15 20:52:42.278635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.278651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-07-15 20:52:42.278789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.278805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 
00:27:07.913 [2024-07-15 20:52:42.279007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.279022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-07-15 20:52:42.279219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.279243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-07-15 20:52:42.279430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-07-15 20:52:42.279445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-07-15 20:52:42.279581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.914 [2024-07-15 20:52:42.279596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.914 qpair failed and we were unable to recover it. 00:27:07.914 [2024-07-15 20:52:42.279724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.914 [2024-07-15 20:52:42.279739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.914 qpair failed and we were unable to recover it. 00:27:07.914 [2024-07-15 20:52:42.279874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.914 [2024-07-15 20:52:42.279889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.914 qpair failed and we were unable to recover it. 00:27:07.914 [2024-07-15 20:52:42.280105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.914 [2024-07-15 20:52:42.280120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.914 qpair failed and we were unable to recover it. 00:27:07.914 [2024-07-15 20:52:42.280313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.914 [2024-07-15 20:52:42.280328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.914 qpair failed and we were unable to recover it. 00:27:07.914 [2024-07-15 20:52:42.280576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.914 [2024-07-15 20:52:42.280591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.914 qpair failed and we were unable to recover it. 00:27:07.914 [2024-07-15 20:52:42.280721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.914 [2024-07-15 20:52:42.280738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:07.914 qpair failed and we were unable to recover it. 
00:27:07.914 [2024-07-15 20:52:42.280873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.914 [2024-07-15 20:52:42.280888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:07.914 qpair failed and we were unable to recover it.
[... the same connect()-refused/qpair-failed pair repeats for tqpair=0x18d9ed0 from 20:52:42.281155 through 20:52:42.294906; every attempt fails with errno = 111 ...]
00:27:07.915 [2024-07-15 20:52:42.295110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.915 [2024-07-15 20:52:42.295131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.915 qpair failed and we were unable to recover it.
00:27:07.915 [2024-07-15 20:52:42.295267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.915 [2024-07-15 20:52:42.295281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.916 qpair failed and we were unable to recover it.
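errno = 111 is ECONNREFUSED on Linux: each connect() to 10.0.0.2:4420 is answered with a TCP RST, meaning nothing is accepting on that address/port yet, so the initiator keeps burning through qpair reconnect attempts. A minimal way to confirm that from the test node might look like the following sketch (hypothetical commands, not captured in this run):

  # is anything listening on the NVMe/TCP port?
  ss -ltn 'sport = :4420'
  # a bare TCP connect fails the same way the SPDK initiator does
  bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' && echo connected || echo 'refused (errno 111)'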
[... identical retries against tqpair=0x7f4840000b90 run from 20:52:42.295788 through 20:52:42.303709, all refused with errno = 111 ...]
00:27:07.916 [2024-07-15 20:52:42.303975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.916 [2024-07-15 20:52:42.303993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.916 qpair failed and we were unable to recover it.
[... retries against tqpair=0x7f4838000b90 run from 20:52:42.304277 through 20:52:42.308766, all refused with errno = 111 ...]
00:27:07.917 [2024-07-15 20:52:42.308964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.917 [2024-07-15 20:52:42.308979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:07.917 qpair failed and we were unable to recover it.
00:27:07.917 [2024-07-15 20:52:42.309243] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:07.917 [2024-07-15 20:52:42.309267] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:07.917 [2024-07-15 20:52:42.309274] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:07.917 [2024-07-15 20:52:42.309281] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:07.917 [2024-07-15 20:52:42.309286] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:07.917 [2024-07-15 20:52:42.309334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:27:07.917 [2024-07-15 20:52:42.309424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:27:07.917 [2024-07-15 20:52:42.309530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:27:07.917 [2024-07-15 20:52:42.309531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
[... connect() retries against tqpair=0x7f4838000b90 continue from 20:52:42.309172 through 20:52:42.310426, each refused with errno = 111, interleaved with the startup notices above ...]
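The app_setup_trace notices describe how to pull a tracepoint snapshot while the target is running; a sketch of the two workflows they mention, assuming a default SPDK build tree for the spdk_trace binary path:

  # live snapshot of the shared-memory trace ring named nvmf, instance 0
  ./build/bin/spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt
  # or preserve the shm file now and decode it offline later
  cp /dev/shm/nvmf_trace.0 /tmp/
  ./build/bin/spdk_trace -f /tmp/nvmf_trace.0

The four 'Reactor started' notices (cores 4-7) are consistent with the target having been launched with a 0xF0 core mask, e.g. '-m 0xF0'.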
[... identical retries against tqpair=0x7f4838000b90 continue from 20:52:42.310634 through 20:52:42.315156, each refused with errno = 111 ...]
00:27:07.918 [2024-07-15 20:52:42.315367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.918 [2024-07-15 20:52:42.315384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.918 qpair failed and we were unable to recover it.
[... retries against tqpair=0x7f4840000b90 continue from 20:52:42.315639 through 20:52:42.326612, all refused with errno = 111 ...]
00:27:07.919 [2024-07-15 20:52:42.326859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.919 [2024-07-15 20:52:42.326872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:07.919 qpair failed and we were unable to recover it.
00:27:07.919 [2024-07-15 20:52:42.327122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.919 [2024-07-15 20:52:42.327136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.919 qpair failed and we were unable to recover it. 00:27:07.919 [2024-07-15 20:52:42.327260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.919 [2024-07-15 20:52:42.327271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.919 qpair failed and we were unable to recover it. 00:27:07.919 [2024-07-15 20:52:42.327451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.919 [2024-07-15 20:52:42.327465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.919 qpair failed and we were unable to recover it. 00:27:07.919 [2024-07-15 20:52:42.327635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.919 [2024-07-15 20:52:42.327647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.919 qpair failed and we were unable to recover it. 00:27:07.919 [2024-07-15 20:52:42.327851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.919 [2024-07-15 20:52:42.327864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.919 qpair failed and we were unable to recover it. 00:27:07.919 [2024-07-15 20:52:42.328128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.919 [2024-07-15 20:52:42.328141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.919 qpair failed and we were unable to recover it. 00:27:07.919 [2024-07-15 20:52:42.328360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.919 [2024-07-15 20:52:42.328375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.919 qpair failed and we were unable to recover it. 00:27:07.919 [2024-07-15 20:52:42.328568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.919 [2024-07-15 20:52:42.328581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.919 qpair failed and we were unable to recover it. 00:27:07.919 [2024-07-15 20:52:42.328769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.919 [2024-07-15 20:52:42.328783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.919 qpair failed and we were unable to recover it. 00:27:07.919 [2024-07-15 20:52:42.328899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.919 [2024-07-15 20:52:42.328913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.919 qpair failed and we were unable to recover it. 
00:27:07.919 [2024-07-15 20:52:42.329102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.919 [2024-07-15 20:52:42.329114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.919 qpair failed and we were unable to recover it. 00:27:07.919 [2024-07-15 20:52:42.329375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.919 [2024-07-15 20:52:42.329390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.919 qpair failed and we were unable to recover it. 00:27:07.919 [2024-07-15 20:52:42.329567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.919 [2024-07-15 20:52:42.329580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.919 qpair failed and we were unable to recover it. 00:27:07.919 [2024-07-15 20:52:42.329766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.329779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-07-15 20:52:42.329950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.329964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-07-15 20:52:42.330141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.330153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-07-15 20:52:42.330414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.330427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-07-15 20:52:42.330599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.330611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-07-15 20:52:42.330892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.330904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-07-15 20:52:42.331088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.331101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 
00:27:07.920 [2024-07-15 20:52:42.331341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.331354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-07-15 20:52:42.331620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.331632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-07-15 20:52:42.331813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.331826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-07-15 20:52:42.332013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.332025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-07-15 20:52:42.332289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.332303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-07-15 20:52:42.332496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.332509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-07-15 20:52:42.332803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.332816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-07-15 20:52:42.333051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.333065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-07-15 20:52:42.333349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.333363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-07-15 20:52:42.333545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.333557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 
00:27:07.920 [2024-07-15 20:52:42.333739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.333754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-07-15 20:52:42.334016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.334029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-07-15 20:52:42.334221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.334250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-07-15 20:52:42.334429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.334441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-07-15 20:52:42.334613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.334625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-07-15 20:52:42.334817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.334829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-07-15 20:52:42.335021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.335033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-07-15 20:52:42.335291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.335304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-07-15 20:52:42.335493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.335507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-07-15 20:52:42.335641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.335654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 
00:27:07.920 [2024-07-15 20:52:42.335881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.335894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-07-15 20:52:42.336007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.336020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-07-15 20:52:42.336266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.336279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-07-15 20:52:42.336483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.336496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-07-15 20:52:42.336672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.336685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-07-15 20:52:42.336810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.336821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-07-15 20:52:42.337104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.337117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-07-15 20:52:42.337360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.337373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-07-15 20:52:42.337620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.337633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-07-15 20:52:42.337823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.337836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 
00:27:07.920 [2024-07-15 20:52:42.338030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.338044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-07-15 20:52:42.338306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.338319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-07-15 20:52:42.338498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.338510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-07-15 20:52:42.338834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.338848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-07-15 20:52:42.339046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.339059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-07-15 20:52:42.339195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-07-15 20:52:42.339208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-07-15 20:52:42.339435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.921 [2024-07-15 20:52:42.339447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.921 qpair failed and we were unable to recover it. 00:27:07.921 [2024-07-15 20:52:42.339645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.921 [2024-07-15 20:52:42.339658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.921 qpair failed and we were unable to recover it. 00:27:07.921 [2024-07-15 20:52:42.339830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.921 [2024-07-15 20:52:42.339844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.921 qpair failed and we were unable to recover it. 00:27:07.921 [2024-07-15 20:52:42.340128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.921 [2024-07-15 20:52:42.340141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.921 qpair failed and we were unable to recover it. 
00:27:07.921 [2024-07-15 20:52:42.340338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.921 [2024-07-15 20:52:42.340354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.921 qpair failed and we were unable to recover it. 00:27:07.921 [2024-07-15 20:52:42.340595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.921 [2024-07-15 20:52:42.340607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.921 qpair failed and we were unable to recover it. 00:27:07.921 [2024-07-15 20:52:42.340727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.921 [2024-07-15 20:52:42.340741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:07.921 qpair failed and we were unable to recover it. 00:27:08.200 [2024-07-15 20:52:42.340910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-07-15 20:52:42.340925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.200 qpair failed and we were unable to recover it. 00:27:08.200 [2024-07-15 20:52:42.341116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-07-15 20:52:42.341130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.200 qpair failed and we were unable to recover it. 00:27:08.200 [2024-07-15 20:52:42.341388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-07-15 20:52:42.341402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.200 qpair failed and we were unable to recover it. 00:27:08.200 [2024-07-15 20:52:42.341686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-07-15 20:52:42.341699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.200 qpair failed and we were unable to recover it. 00:27:08.200 [2024-07-15 20:52:42.341878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-07-15 20:52:42.341890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.200 qpair failed and we were unable to recover it. 00:27:08.200 [2024-07-15 20:52:42.342077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-07-15 20:52:42.342089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.200 qpair failed and we were unable to recover it. 00:27:08.200 [2024-07-15 20:52:42.342308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-07-15 20:52:42.342322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.200 qpair failed and we were unable to recover it. 
00:27:08.200 [2024-07-15 20:52:42.342535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-07-15 20:52:42.342551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.200 qpair failed and we were unable to recover it. 00:27:08.200 [2024-07-15 20:52:42.342833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-07-15 20:52:42.342850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.200 qpair failed and we were unable to recover it. 00:27:08.200 [2024-07-15 20:52:42.343024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-07-15 20:52:42.343038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.200 qpair failed and we were unable to recover it. 00:27:08.200 [2024-07-15 20:52:42.343234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-07-15 20:52:42.343248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.200 qpair failed and we were unable to recover it. 00:27:08.200 [2024-07-15 20:52:42.343375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-07-15 20:52:42.343389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.200 qpair failed and we were unable to recover it. 00:27:08.200 [2024-07-15 20:52:42.343566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-07-15 20:52:42.343579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.200 qpair failed and we were unable to recover it. 00:27:08.200 [2024-07-15 20:52:42.343816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-07-15 20:52:42.343829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.200 qpair failed and we were unable to recover it. 00:27:08.200 [2024-07-15 20:52:42.344012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-07-15 20:52:42.344025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.200 qpair failed and we were unable to recover it. 00:27:08.200 [2024-07-15 20:52:42.344146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-07-15 20:52:42.344159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.200 qpair failed and we were unable to recover it. 00:27:08.200 [2024-07-15 20:52:42.344399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-07-15 20:52:42.344414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.200 qpair failed and we were unable to recover it. 
00:27:08.200 [2024-07-15 20:52:42.344593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-07-15 20:52:42.344605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.200 qpair failed and we were unable to recover it. 00:27:08.200 [2024-07-15 20:52:42.344801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-07-15 20:52:42.344815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.200 qpair failed and we were unable to recover it. 00:27:08.200 [2024-07-15 20:52:42.344996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-07-15 20:52:42.345009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.200 qpair failed and we were unable to recover it. 00:27:08.200 [2024-07-15 20:52:42.345259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-07-15 20:52:42.345274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.200 qpair failed and we were unable to recover it. 00:27:08.200 [2024-07-15 20:52:42.345404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-07-15 20:52:42.345417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.200 qpair failed and we were unable to recover it. 00:27:08.200 [2024-07-15 20:52:42.345653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-07-15 20:52:42.345666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.200 qpair failed and we were unable to recover it. 00:27:08.200 [2024-07-15 20:52:42.345772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-07-15 20:52:42.345786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.200 qpair failed and we were unable to recover it. 00:27:08.200 [2024-07-15 20:52:42.345985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-07-15 20:52:42.345998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.200 qpair failed and we were unable to recover it. 00:27:08.200 [2024-07-15 20:52:42.346183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-07-15 20:52:42.346196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.200 qpair failed and we were unable to recover it. 00:27:08.200 [2024-07-15 20:52:42.346455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-07-15 20:52:42.346469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.200 qpair failed and we were unable to recover it. 
00:27:08.200 [2024-07-15 20:52:42.346639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-07-15 20:52:42.346651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.200 qpair failed and we were unable to recover it. 00:27:08.200 [2024-07-15 20:52:42.346913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-07-15 20:52:42.346926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.200 qpair failed and we were unable to recover it. 00:27:08.200 [2024-07-15 20:52:42.347166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-07-15 20:52:42.347179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.200 qpair failed and we were unable to recover it. 00:27:08.200 [2024-07-15 20:52:42.347306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-07-15 20:52:42.347318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.200 qpair failed and we were unable to recover it. 00:27:08.200 [2024-07-15 20:52:42.347487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-07-15 20:52:42.347499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.200 qpair failed and we were unable to recover it. 00:27:08.200 [2024-07-15 20:52:42.347775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-07-15 20:52:42.347787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.200 qpair failed and we were unable to recover it. 00:27:08.200 [2024-07-15 20:52:42.348045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-07-15 20:52:42.348057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.200 qpair failed and we were unable to recover it. 00:27:08.200 [2024-07-15 20:52:42.348301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-07-15 20:52:42.348337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.200 qpair failed and we were unable to recover it. 00:27:08.200 [2024-07-15 20:52:42.348591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.200 [2024-07-15 20:52:42.348607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-07-15 20:52:42.348814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-07-15 20:52:42.348830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 
00:27:08.201 [2024-07-15 20:52:42.349097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-07-15 20:52:42.349112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-07-15 20:52:42.349328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-07-15 20:52:42.349344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-07-15 20:52:42.349563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-07-15 20:52:42.349578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-07-15 20:52:42.349765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-07-15 20:52:42.349780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-07-15 20:52:42.349908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-07-15 20:52:42.349923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-07-15 20:52:42.350147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-07-15 20:52:42.350162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-07-15 20:52:42.350312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-07-15 20:52:42.350329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-07-15 20:52:42.350601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-07-15 20:52:42.350617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-07-15 20:52:42.350817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-07-15 20:52:42.350832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-07-15 20:52:42.351013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-07-15 20:52:42.351028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 
00:27:08.201 [2024-07-15 20:52:42.351215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-07-15 20:52:42.351235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-07-15 20:52:42.351487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-07-15 20:52:42.351502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-07-15 20:52:42.351809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-07-15 20:52:42.351824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-07-15 20:52:42.352083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-07-15 20:52:42.352098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-07-15 20:52:42.352277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-07-15 20:52:42.352293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-07-15 20:52:42.352489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-07-15 20:52:42.352505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-07-15 20:52:42.355484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-07-15 20:52:42.355508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-07-15 20:52:42.355823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-07-15 20:52:42.355841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-07-15 20:52:42.356081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-07-15 20:52:42.356097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-07-15 20:52:42.356306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-07-15 20:52:42.356323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 
00:27:08.201 [2024-07-15 20:52:42.356633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-07-15 20:52:42.356649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-07-15 20:52:42.356892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-07-15 20:52:42.356907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-07-15 20:52:42.357182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-07-15 20:52:42.357198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-07-15 20:52:42.357448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-07-15 20:52:42.357465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-07-15 20:52:42.357759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-07-15 20:52:42.357778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-07-15 20:52:42.358019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-07-15 20:52:42.358032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-07-15 20:52:42.358229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-07-15 20:52:42.358242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-07-15 20:52:42.358464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-07-15 20:52:42.358475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-07-15 20:52:42.358670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-07-15 20:52:42.358682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 00:27:08.201 [2024-07-15 20:52:42.358942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-07-15 20:52:42.358954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it. 
00:27:08.201 [2024-07-15 20:52:42.359165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.201 [2024-07-15 20:52:42.359177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.201 qpair failed and we were unable to recover it.
[... the identical connect()/qpair-failure cycle above repeats roughly 210 times between 20:52:42.359165 and 20:52:42.404102 (elapsed time 00:27:08.201 through 00:27:08.206); every attempt targets tqpair=0x7f4840000b90 at addr=10.0.0.2, port=4420, fails with errno = 111, and ends with "qpair failed and we were unable to recover it." ...]
00:27:08.207 [2024-07-15 20:52:42.404369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-07-15 20:52:42.404382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-07-15 20:52:42.404561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-07-15 20:52:42.404572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-07-15 20:52:42.404779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-07-15 20:52:42.404790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-07-15 20:52:42.404919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-07-15 20:52:42.404931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-07-15 20:52:42.405054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-07-15 20:52:42.405066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-07-15 20:52:42.405259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-07-15 20:52:42.405271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-07-15 20:52:42.405504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-07-15 20:52:42.405515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-07-15 20:52:42.405714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-07-15 20:52:42.405725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-07-15 20:52:42.405849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-07-15 20:52:42.405861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-07-15 20:52:42.405979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-07-15 20:52:42.405991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 
00:27:08.207 [2024-07-15 20:52:42.406112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-07-15 20:52:42.406125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-07-15 20:52:42.406237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-07-15 20:52:42.406250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-07-15 20:52:42.406472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-07-15 20:52:42.406484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-07-15 20:52:42.406672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-07-15 20:52:42.406684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-07-15 20:52:42.406923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-07-15 20:52:42.406934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-07-15 20:52:42.407176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-07-15 20:52:42.407188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-07-15 20:52:42.407365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-07-15 20:52:42.407377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-07-15 20:52:42.407583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-07-15 20:52:42.407595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-07-15 20:52:42.407767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-07-15 20:52:42.407779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-07-15 20:52:42.407941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-07-15 20:52:42.407952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 
00:27:08.207 [2024-07-15 20:52:42.408207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-07-15 20:52:42.408219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-07-15 20:52:42.408460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-07-15 20:52:42.408472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-07-15 20:52:42.408728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-07-15 20:52:42.408740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-07-15 20:52:42.408910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-07-15 20:52:42.408922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-07-15 20:52:42.409163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-07-15 20:52:42.409177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-07-15 20:52:42.409414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-07-15 20:52:42.409426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-07-15 20:52:42.409619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-07-15 20:52:42.409632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-07-15 20:52:42.409823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-07-15 20:52:42.409835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-07-15 20:52:42.410007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-07-15 20:52:42.410019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-07-15 20:52:42.410252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-07-15 20:52:42.410265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 
00:27:08.207 [2024-07-15 20:52:42.410473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-07-15 20:52:42.410484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-07-15 20:52:42.410671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-07-15 20:52:42.410683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-07-15 20:52:42.410868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-07-15 20:52:42.410879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-07-15 20:52:42.411115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-07-15 20:52:42.411127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-07-15 20:52:42.411314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-07-15 20:52:42.411326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-07-15 20:52:42.411495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-07-15 20:52:42.411508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-07-15 20:52:42.411760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.207 [2024-07-15 20:52:42.411773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.207 qpair failed and we were unable to recover it. 00:27:08.207 [2024-07-15 20:52:42.411938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-07-15 20:52:42.411950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-07-15 20:52:42.412146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-07-15 20:52:42.412158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-07-15 20:52:42.412347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-07-15 20:52:42.412360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 
00:27:08.208 [2024-07-15 20:52:42.412612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-07-15 20:52:42.412624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-07-15 20:52:42.412856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-07-15 20:52:42.412867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-07-15 20:52:42.413126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-07-15 20:52:42.413137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-07-15 20:52:42.413324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-07-15 20:52:42.413337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-07-15 20:52:42.413549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-07-15 20:52:42.413560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-07-15 20:52:42.413743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-07-15 20:52:42.413754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-07-15 20:52:42.413944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-07-15 20:52:42.413956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-07-15 20:52:42.414159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-07-15 20:52:42.414170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-07-15 20:52:42.414379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-07-15 20:52:42.414391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-07-15 20:52:42.414560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-07-15 20:52:42.414572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 
00:27:08.208 [2024-07-15 20:52:42.414816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-07-15 20:52:42.414827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-07-15 20:52:42.414995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-07-15 20:52:42.415007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-07-15 20:52:42.415115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-07-15 20:52:42.415127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-07-15 20:52:42.415330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-07-15 20:52:42.415342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-07-15 20:52:42.415516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-07-15 20:52:42.415529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-07-15 20:52:42.415787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-07-15 20:52:42.415798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-07-15 20:52:42.415985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-07-15 20:52:42.415996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-07-15 20:52:42.416254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-07-15 20:52:42.416266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-07-15 20:52:42.416456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-07-15 20:52:42.416468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-07-15 20:52:42.416714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-07-15 20:52:42.416726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 
00:27:08.208 [2024-07-15 20:52:42.416882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-07-15 20:52:42.416894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-07-15 20:52:42.417128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-07-15 20:52:42.417139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-07-15 20:52:42.417260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-07-15 20:52:42.417272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-07-15 20:52:42.417505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-07-15 20:52:42.417516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-07-15 20:52:42.417649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-07-15 20:52:42.417663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-07-15 20:52:42.417895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-07-15 20:52:42.417906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-07-15 20:52:42.418142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-07-15 20:52:42.418153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-07-15 20:52:42.418371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-07-15 20:52:42.418383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-07-15 20:52:42.418593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-07-15 20:52:42.418606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-07-15 20:52:42.418864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-07-15 20:52:42.418875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 
00:27:08.208 [2024-07-15 20:52:42.419044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-07-15 20:52:42.419056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-07-15 20:52:42.419290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-07-15 20:52:42.419303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.208 qpair failed and we were unable to recover it. 00:27:08.208 [2024-07-15 20:52:42.419562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.208 [2024-07-15 20:52:42.419573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.209 qpair failed and we were unable to recover it. 00:27:08.209 [2024-07-15 20:52:42.419760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.209 [2024-07-15 20:52:42.419772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.209 qpair failed and we were unable to recover it. 00:27:08.209 [2024-07-15 20:52:42.420008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.209 [2024-07-15 20:52:42.420020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.209 qpair failed and we were unable to recover it. 00:27:08.209 [2024-07-15 20:52:42.420303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.209 [2024-07-15 20:52:42.420316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.209 qpair failed and we were unable to recover it. 00:27:08.209 [2024-07-15 20:52:42.420434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.209 [2024-07-15 20:52:42.420446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.209 qpair failed and we were unable to recover it. 00:27:08.209 [2024-07-15 20:52:42.420674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.209 [2024-07-15 20:52:42.420685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.209 qpair failed and we were unable to recover it. 00:27:08.209 [2024-07-15 20:52:42.420819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.209 [2024-07-15 20:52:42.420831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.209 qpair failed and we were unable to recover it. 00:27:08.209 [2024-07-15 20:52:42.421045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.209 [2024-07-15 20:52:42.421057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.209 qpair failed and we were unable to recover it. 
00:27:08.209 [2024-07-15 20:52:42.421232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.209 [2024-07-15 20:52:42.421244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.209 qpair failed and we were unable to recover it. 00:27:08.209 [2024-07-15 20:52:42.421418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.209 [2024-07-15 20:52:42.421429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.209 qpair failed and we were unable to recover it. 00:27:08.209 [2024-07-15 20:52:42.421616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.209 [2024-07-15 20:52:42.421628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.209 qpair failed and we were unable to recover it. 00:27:08.209 [2024-07-15 20:52:42.421816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.209 [2024-07-15 20:52:42.421827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.209 qpair failed and we were unable to recover it. 00:27:08.209 [2024-07-15 20:52:42.422061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.209 [2024-07-15 20:52:42.422072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.209 qpair failed and we were unable to recover it. 00:27:08.209 [2024-07-15 20:52:42.422333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.209 [2024-07-15 20:52:42.422346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.209 qpair failed and we were unable to recover it. 00:27:08.209 [2024-07-15 20:52:42.422585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.209 [2024-07-15 20:52:42.422596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.209 qpair failed and we were unable to recover it. 00:27:08.209 [2024-07-15 20:52:42.422829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.209 [2024-07-15 20:52:42.422840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.209 qpair failed and we were unable to recover it. 00:27:08.209 [2024-07-15 20:52:42.423005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.209 [2024-07-15 20:52:42.423017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.209 qpair failed and we were unable to recover it. 00:27:08.209 [2024-07-15 20:52:42.423202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.209 [2024-07-15 20:52:42.423214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.209 qpair failed and we were unable to recover it. 
00:27:08.209 [2024-07-15 20:52:42.423403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.209 [2024-07-15 20:52:42.423416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.209 qpair failed and we were unable to recover it. 00:27:08.209 [2024-07-15 20:52:42.423578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.209 [2024-07-15 20:52:42.423620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:08.209 qpair failed and we were unable to recover it. 00:27:08.209 [2024-07-15 20:52:42.423850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.209 [2024-07-15 20:52:42.423876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:08.209 qpair failed and we were unable to recover it. 00:27:08.209 [2024-07-15 20:52:42.424024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.209 [2024-07-15 20:52:42.424040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:08.209 qpair failed and we were unable to recover it. 00:27:08.209 [2024-07-15 20:52:42.424233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.209 [2024-07-15 20:52:42.424248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:08.209 qpair failed and we were unable to recover it. 00:27:08.209 [2024-07-15 20:52:42.424459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.209 [2024-07-15 20:52:42.424474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:08.209 qpair failed and we were unable to recover it. 00:27:08.209 [2024-07-15 20:52:42.424672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.209 [2024-07-15 20:52:42.424686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:08.209 qpair failed and we were unable to recover it. 00:27:08.209 [2024-07-15 20:52:42.424928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.209 [2024-07-15 20:52:42.424944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:08.209 qpair failed and we were unable to recover it. 00:27:08.209 [2024-07-15 20:52:42.425148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.209 [2024-07-15 20:52:42.425163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:08.209 qpair failed and we were unable to recover it. 00:27:08.209 [2024-07-15 20:52:42.425290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.209 [2024-07-15 20:52:42.425306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:08.209 qpair failed and we were unable to recover it. 
00:27:08.210 [2024-07-15 20:52:42.431818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.210 [2024-07-15 20:52:42.431837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:08.210 qpair failed and we were unable to recover it.
[... the same triplet repeats for tqpair=0x7f4838000b90 from 20:52:42.432085 through 20:52:42.436211 ...]
00:27:08.210 [2024-07-15 20:52:42.436529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.211 [2024-07-15 20:52:42.436543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.211 qpair failed and we were unable to recover it.
[... the same triplet repeats for tqpair=0x7f4840000b90 from 20:52:42.436783 through 20:52:42.439329 ...]
00:27:08.211 [2024-07-15 20:52:42.439512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.211 [2024-07-15 20:52:42.439524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.211 qpair failed and we were unable to recover it.
00:27:08.211 [2024-07-15 20:52:42.439675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.211 [2024-07-15 20:52:42.439687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.211 qpair failed and we were unable to recover it. 00:27:08.211 [2024-07-15 20:52:42.439821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.211 [2024-07-15 20:52:42.439833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.211 qpair failed and we were unable to recover it. 00:27:08.211 [2024-07-15 20:52:42.440130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.211 [2024-07-15 20:52:42.440142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.211 qpair failed and we were unable to recover it. 00:27:08.211 [2024-07-15 20:52:42.440378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.211 [2024-07-15 20:52:42.440390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.211 qpair failed and we were unable to recover it. 00:27:08.211 [2024-07-15 20:52:42.440517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.211 [2024-07-15 20:52:42.440529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.211 qpair failed and we were unable to recover it. 00:27:08.211 [2024-07-15 20:52:42.440789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.211 [2024-07-15 20:52:42.440801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.211 qpair failed and we were unable to recover it. 00:27:08.211 [2024-07-15 20:52:42.440978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.211 [2024-07-15 20:52:42.440990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.211 qpair failed and we were unable to recover it. 00:27:08.211 [2024-07-15 20:52:42.441118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.211 [2024-07-15 20:52:42.441130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.211 qpair failed and we were unable to recover it. 00:27:08.211 [2024-07-15 20:52:42.441310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.211 [2024-07-15 20:52:42.441323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.211 qpair failed and we were unable to recover it. 00:27:08.211 [2024-07-15 20:52:42.441439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.211 [2024-07-15 20:52:42.441451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.211 qpair failed and we were unable to recover it. 
00:27:08.211 [2024-07-15 20:52:42.441573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.211 [2024-07-15 20:52:42.441585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.211 qpair failed and we were unable to recover it. 00:27:08.211 [2024-07-15 20:52:42.441753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.211 [2024-07-15 20:52:42.441765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.211 qpair failed and we were unable to recover it. 00:27:08.211 [2024-07-15 20:52:42.441896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.211 [2024-07-15 20:52:42.441908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.211 qpair failed and we were unable to recover it. 00:27:08.211 [2024-07-15 20:52:42.442081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.211 [2024-07-15 20:52:42.442092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.211 qpair failed and we were unable to recover it. 00:27:08.211 [2024-07-15 20:52:42.442218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.211 [2024-07-15 20:52:42.442236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.211 qpair failed and we were unable to recover it. 00:27:08.211 [2024-07-15 20:52:42.442425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.211 [2024-07-15 20:52:42.442436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.211 qpair failed and we were unable to recover it. 00:27:08.211 [2024-07-15 20:52:42.442549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.211 [2024-07-15 20:52:42.442560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.211 qpair failed and we were unable to recover it. 00:27:08.211 [2024-07-15 20:52:42.442747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.211 [2024-07-15 20:52:42.442759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.211 qpair failed and we were unable to recover it. 00:27:08.211 [2024-07-15 20:52:42.442929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.211 [2024-07-15 20:52:42.442941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.211 qpair failed and we were unable to recover it. 00:27:08.211 [2024-07-15 20:52:42.443126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.211 [2024-07-15 20:52:42.443138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.211 qpair failed and we were unable to recover it. 
00:27:08.211 [2024-07-15 20:52:42.443237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.211 [2024-07-15 20:52:42.443249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.211 qpair failed and we were unable to recover it. 00:27:08.211 [2024-07-15 20:52:42.443368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.211 [2024-07-15 20:52:42.443380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.211 qpair failed and we were unable to recover it. 00:27:08.211 [2024-07-15 20:52:42.443569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.211 [2024-07-15 20:52:42.443581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.211 qpair failed and we were unable to recover it. 00:27:08.211 [2024-07-15 20:52:42.443774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.211 [2024-07-15 20:52:42.443786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.211 qpair failed and we were unable to recover it. 00:27:08.211 [2024-07-15 20:52:42.443990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.211 [2024-07-15 20:52:42.444002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.211 qpair failed and we were unable to recover it. 00:27:08.211 [2024-07-15 20:52:42.444117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-07-15 20:52:42.444129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-07-15 20:52:42.444364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-07-15 20:52:42.444375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-07-15 20:52:42.444505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-07-15 20:52:42.444517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-07-15 20:52:42.444628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-07-15 20:52:42.444641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-07-15 20:52:42.444875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-07-15 20:52:42.444887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 
00:27:08.212 [2024-07-15 20:52:42.445145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-07-15 20:52:42.445156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-07-15 20:52:42.445339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-07-15 20:52:42.445351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-07-15 20:52:42.445567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-07-15 20:52:42.445579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-07-15 20:52:42.445862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-07-15 20:52:42.445874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-07-15 20:52:42.446119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-07-15 20:52:42.446131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-07-15 20:52:42.446339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-07-15 20:52:42.446351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-07-15 20:52:42.446611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-07-15 20:52:42.446623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-07-15 20:52:42.446837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-07-15 20:52:42.446849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-07-15 20:52:42.447102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-07-15 20:52:42.447114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-07-15 20:52:42.447335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-07-15 20:52:42.447347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 
00:27:08.212 [2024-07-15 20:52:42.447631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-07-15 20:52:42.447642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-07-15 20:52:42.447837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-07-15 20:52:42.447849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-07-15 20:52:42.448061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-07-15 20:52:42.448073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-07-15 20:52:42.448261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-07-15 20:52:42.448274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-07-15 20:52:42.448388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-07-15 20:52:42.448401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-07-15 20:52:42.448535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-07-15 20:52:42.448546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-07-15 20:52:42.448757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-07-15 20:52:42.448768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-07-15 20:52:42.449047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-07-15 20:52:42.449060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-07-15 20:52:42.449244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-07-15 20:52:42.449257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-07-15 20:52:42.449468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-07-15 20:52:42.449480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 
00:27:08.212 [2024-07-15 20:52:42.449690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-07-15 20:52:42.449702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-07-15 20:52:42.449938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-07-15 20:52:42.449950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-07-15 20:52:42.450182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-07-15 20:52:42.450193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-07-15 20:52:42.450321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-07-15 20:52:42.450333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-07-15 20:52:42.450600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-07-15 20:52:42.450614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-07-15 20:52:42.450783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-07-15 20:52:42.450795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-07-15 20:52:42.450956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-07-15 20:52:42.450967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-07-15 20:52:42.451213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-07-15 20:52:42.451230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-07-15 20:52:42.451417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-07-15 20:52:42.451429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-07-15 20:52:42.451684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-07-15 20:52:42.451696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 
00:27:08.212 [2024-07-15 20:52:42.451900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.212 [2024-07-15 20:52:42.451912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.212 qpair failed and we were unable to recover it. 00:27:08.212 [2024-07-15 20:52:42.452098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.452110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-07-15 20:52:42.452354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.452366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-07-15 20:52:42.452602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.452614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-07-15 20:52:42.452855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.452866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-07-15 20:52:42.452991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.453002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-07-15 20:52:42.453237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.453249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-07-15 20:52:42.453380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.453392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-07-15 20:52:42.453578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.453590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-07-15 20:52:42.453836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.453848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 
00:27:08.213 [2024-07-15 20:52:42.454034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.454046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-07-15 20:52:42.454342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.454354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-07-15 20:52:42.454463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.454474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-07-15 20:52:42.454715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.454727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-07-15 20:52:42.454938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.454949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-07-15 20:52:42.455235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.455247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-07-15 20:52:42.455368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.455381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-07-15 20:52:42.455614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.455625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-07-15 20:52:42.455742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.455753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-07-15 20:52:42.455885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.455897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 
00:27:08.213 [2024-07-15 20:52:42.456002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.456014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-07-15 20:52:42.456252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.456263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-07-15 20:52:42.456451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.456462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-07-15 20:52:42.456646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.456657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-07-15 20:52:42.456763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.456774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-07-15 20:52:42.456989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.457001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-07-15 20:52:42.457214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.457229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-07-15 20:52:42.457369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.457382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-07-15 20:52:42.457656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.457668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-07-15 20:52:42.457837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.457849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 
00:27:08.213 [2024-07-15 20:52:42.458105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.458116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-07-15 20:52:42.458376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.458388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-07-15 20:52:42.458644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.458656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-07-15 20:52:42.458854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.458865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-07-15 20:52:42.458992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.459005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-07-15 20:52:42.459252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.459263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-07-15 20:52:42.459504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.459516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-07-15 20:52:42.459778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.459790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-07-15 20:52:42.459976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.459988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-07-15 20:52:42.460246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.460258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 
00:27:08.213 [2024-07-15 20:52:42.460497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.460508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.213 [2024-07-15 20:52:42.460677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.213 [2024-07-15 20:52:42.460690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.213 qpair failed and we were unable to recover it. 00:27:08.214 [2024-07-15 20:52:42.460909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.460921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-07-15 20:52:42.461162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.461173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-07-15 20:52:42.461379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.461390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-07-15 20:52:42.461594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.461605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-07-15 20:52:42.461891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.461903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-07-15 20:52:42.462086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.462098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-07-15 20:52:42.462347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.462360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-07-15 20:52:42.462619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.462630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 
00:27:08.214 [2024-07-15 20:52:42.462843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.462855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-07-15 20:52:42.463103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.463115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-07-15 20:52:42.463406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.463418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-07-15 20:52:42.463630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.463642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-07-15 20:52:42.463905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.463917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-07-15 20:52:42.464093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.464105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-07-15 20:52:42.464385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.464398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-07-15 20:52:42.464566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.464578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-07-15 20:52:42.464768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.464780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-07-15 20:52:42.464899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.464910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 
00:27:08.214 [2024-07-15 20:52:42.465092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.465103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-07-15 20:52:42.465311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.465323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-07-15 20:52:42.465576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.465588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-07-15 20:52:42.465871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.465882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-07-15 20:52:42.466003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.466015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-07-15 20:52:42.466273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.466285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-07-15 20:52:42.466557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.466569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-07-15 20:52:42.466784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.466795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-07-15 20:52:42.467067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.467079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-07-15 20:52:42.467259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.467270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 
00:27:08.214 [2024-07-15 20:52:42.467504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.467516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-07-15 20:52:42.467752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.467764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-07-15 20:52:42.468045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.468056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-07-15 20:52:42.468231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.468244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-07-15 20:52:42.468446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.468458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-07-15 20:52:42.468567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.468579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-07-15 20:52:42.468767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.468779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-07-15 20:52:42.469044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.469056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-07-15 20:52:42.469334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.469346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-07-15 20:52:42.469527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.469539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 
00:27:08.214 [2024-07-15 20:52:42.469745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.469757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-07-15 20:52:42.469885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.469897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-07-15 20:52:42.470182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.214 [2024-07-15 20:52:42.470194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.214 qpair failed and we were unable to recover it. 00:27:08.214 [2024-07-15 20:52:42.470475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.470487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 00:27:08.215 [2024-07-15 20:52:42.470730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.470742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 00:27:08.215 [2024-07-15 20:52:42.470989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.471002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 00:27:08.215 [2024-07-15 20:52:42.471259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.471271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 00:27:08.215 [2024-07-15 20:52:42.471452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.471464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 00:27:08.215 [2024-07-15 20:52:42.471721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.471732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 00:27:08.215 [2024-07-15 20:52:42.471994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.472006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 
00:27:08.215 [2024-07-15 20:52:42.472244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.472256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 00:27:08.215 [2024-07-15 20:52:42.472495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.472506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 00:27:08.215 [2024-07-15 20:52:42.472699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.472711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 00:27:08.215 [2024-07-15 20:52:42.472831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.472844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 00:27:08.215 [2024-07-15 20:52:42.473028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.473040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 00:27:08.215 [2024-07-15 20:52:42.473282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.473295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 00:27:08.215 [2024-07-15 20:52:42.473483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.473495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 00:27:08.215 [2024-07-15 20:52:42.473664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.473676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 00:27:08.215 [2024-07-15 20:52:42.473937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.473949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 00:27:08.215 [2024-07-15 20:52:42.474247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.474259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 
00:27:08.215 [2024-07-15 20:52:42.474452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.474464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 00:27:08.215 [2024-07-15 20:52:42.474748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.474762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 00:27:08.215 [2024-07-15 20:52:42.474884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.474896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 00:27:08.215 [2024-07-15 20:52:42.475151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.475162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 00:27:08.215 [2024-07-15 20:52:42.475423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.475435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 00:27:08.215 [2024-07-15 20:52:42.475674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.475686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 00:27:08.215 [2024-07-15 20:52:42.475944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.475956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 00:27:08.215 [2024-07-15 20:52:42.476210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.476222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 00:27:08.215 [2024-07-15 20:52:42.476480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.476492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 00:27:08.215 [2024-07-15 20:52:42.476706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.476718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 
00:27:08.215 [2024-07-15 20:52:42.476957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.476969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 00:27:08.215 [2024-07-15 20:52:42.477156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.477167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 00:27:08.215 [2024-07-15 20:52:42.477398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.477411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 00:27:08.215 [2024-07-15 20:52:42.477598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.477609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 00:27:08.215 [2024-07-15 20:52:42.477781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.477793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 00:27:08.215 [2024-07-15 20:52:42.477969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.477980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 00:27:08.215 [2024-07-15 20:52:42.478241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.478253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 00:27:08.215 [2024-07-15 20:52:42.478361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.478373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 00:27:08.215 [2024-07-15 20:52:42.478627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.478638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 00:27:08.215 [2024-07-15 20:52:42.478911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.478922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 
00:27:08.215 [2024-07-15 20:52:42.479126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.479138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 00:27:08.215 [2024-07-15 20:52:42.479320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.479332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 00:27:08.215 [2024-07-15 20:52:42.479611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.479623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 00:27:08.215 [2024-07-15 20:52:42.479791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.215 [2024-07-15 20:52:42.479803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.215 qpair failed and we were unable to recover it. 00:27:08.216 [2024-07-15 20:52:42.479982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.479993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 00:27:08.216 [2024-07-15 20:52:42.480255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.480267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 00:27:08.216 [2024-07-15 20:52:42.480441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.480452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 00:27:08.216 [2024-07-15 20:52:42.480712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.480724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 00:27:08.216 [2024-07-15 20:52:42.480963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.480974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 00:27:08.216 [2024-07-15 20:52:42.481210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.481223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 
00:27:08.216 [2024-07-15 20:52:42.481494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.481505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 00:27:08.216 [2024-07-15 20:52:42.481617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.481628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 00:27:08.216 [2024-07-15 20:52:42.481883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.481895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 00:27:08.216 [2024-07-15 20:52:42.482166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.482178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 00:27:08.216 [2024-07-15 20:52:42.482350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.482363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 00:27:08.216 [2024-07-15 20:52:42.482623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.482635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 00:27:08.216 [2024-07-15 20:52:42.482834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.482846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 00:27:08.216 [2024-07-15 20:52:42.483024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.483036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 00:27:08.216 [2024-07-15 20:52:42.483231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.483243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 00:27:08.216 [2024-07-15 20:52:42.483429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.483441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 
00:27:08.216 [2024-07-15 20:52:42.483664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.483675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 00:27:08.216 [2024-07-15 20:52:42.483846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.483860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 00:27:08.216 [2024-07-15 20:52:42.484091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.484103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 00:27:08.216 [2024-07-15 20:52:42.484384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.484395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 00:27:08.216 [2024-07-15 20:52:42.484568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.484580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 00:27:08.216 [2024-07-15 20:52:42.484819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.484830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 00:27:08.216 [2024-07-15 20:52:42.485016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.485028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 00:27:08.216 [2024-07-15 20:52:42.485196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.485207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 00:27:08.216 [2024-07-15 20:52:42.485475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.485487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 00:27:08.216 [2024-07-15 20:52:42.485699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.485712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 
00:27:08.216 [2024-07-15 20:52:42.485890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.485902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 00:27:08.216 [2024-07-15 20:52:42.486089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.486101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 00:27:08.216 [2024-07-15 20:52:42.486338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.486350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 00:27:08.216 [2024-07-15 20:52:42.486611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.486623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 00:27:08.216 [2024-07-15 20:52:42.486877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.486888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 00:27:08.216 [2024-07-15 20:52:42.487165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.487177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 00:27:08.216 [2024-07-15 20:52:42.487391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.487403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 00:27:08.216 [2024-07-15 20:52:42.487665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.487677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 00:27:08.216 [2024-07-15 20:52:42.487849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.487861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 00:27:08.216 [2024-07-15 20:52:42.488027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.488040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 
00:27:08.216 [2024-07-15 20:52:42.488140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.488152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 00:27:08.216 [2024-07-15 20:52:42.488333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.488346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 00:27:08.216 [2024-07-15 20:52:42.488595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.488607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 00:27:08.216 [2024-07-15 20:52:42.488737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.488749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 00:27:08.216 [2024-07-15 20:52:42.488919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.216 [2024-07-15 20:52:42.488930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.216 qpair failed and we were unable to recover it. 00:27:08.217 [2024-07-15 20:52:42.489108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.217 [2024-07-15 20:52:42.489120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.217 qpair failed and we were unable to recover it. 00:27:08.217 [2024-07-15 20:52:42.489322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.217 [2024-07-15 20:52:42.489334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.217 qpair failed and we were unable to recover it. 00:27:08.217 [2024-07-15 20:52:42.489522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.217 [2024-07-15 20:52:42.489534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.217 qpair failed and we were unable to recover it. 00:27:08.217 [2024-07-15 20:52:42.489774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.217 [2024-07-15 20:52:42.489786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.217 qpair failed and we were unable to recover it. 00:27:08.217 [2024-07-15 20:52:42.489989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.217 [2024-07-15 20:52:42.490002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.217 qpair failed and we were unable to recover it. 
00:27:08.217 [2024-07-15 20:52:42.490206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.217 [2024-07-15 20:52:42.490217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.217 qpair failed and we were unable to recover it. 00:27:08.217 [2024-07-15 20:52:42.490330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.217 [2024-07-15 20:52:42.490343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.217 qpair failed and we were unable to recover it. 00:27:08.217 [2024-07-15 20:52:42.490532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.217 [2024-07-15 20:52:42.490543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.217 qpair failed and we were unable to recover it. 00:27:08.217 [2024-07-15 20:52:42.490803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.217 [2024-07-15 20:52:42.490814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.217 qpair failed and we were unable to recover it. 00:27:08.217 [2024-07-15 20:52:42.491098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.217 [2024-07-15 20:52:42.491110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.217 qpair failed and we were unable to recover it. 00:27:08.217 [2024-07-15 20:52:42.491374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.217 [2024-07-15 20:52:42.491386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.217 qpair failed and we were unable to recover it. 00:27:08.217 [2024-07-15 20:52:42.491642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.217 [2024-07-15 20:52:42.491654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.217 qpair failed and we were unable to recover it. 00:27:08.217 [2024-07-15 20:52:42.491885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.217 [2024-07-15 20:52:42.491897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.217 qpair failed and we were unable to recover it. 00:27:08.217 [2024-07-15 20:52:42.492065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.217 [2024-07-15 20:52:42.492077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.217 qpair failed and we were unable to recover it. 00:27:08.217 [2024-07-15 20:52:42.492319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.217 [2024-07-15 20:52:42.492331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.217 qpair failed and we were unable to recover it. 
00:27:08.217 [2024-07-15 20:52:42.492523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.217 [2024-07-15 20:52:42.492535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.217 qpair failed and we were unable to recover it. 00:27:08.217 [2024-07-15 20:52:42.492703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.217 [2024-07-15 20:52:42.492716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.217 qpair failed and we were unable to recover it. 00:27:08.217 [2024-07-15 20:52:42.492899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.217 [2024-07-15 20:52:42.492911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.217 qpair failed and we were unable to recover it. 00:27:08.217 [2024-07-15 20:52:42.493083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.217 [2024-07-15 20:52:42.493095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.217 qpair failed and we were unable to recover it. 00:27:08.217 [2024-07-15 20:52:42.493352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.217 [2024-07-15 20:52:42.493364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.217 qpair failed and we were unable to recover it. 00:27:08.217 [2024-07-15 20:52:42.493558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.217 [2024-07-15 20:52:42.493570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.217 qpair failed and we were unable to recover it. 00:27:08.217 [2024-07-15 20:52:42.493827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.217 [2024-07-15 20:52:42.493838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.217 qpair failed and we were unable to recover it. 00:27:08.217 [2024-07-15 20:52:42.494101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.217 [2024-07-15 20:52:42.494112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.217 qpair failed and we were unable to recover it. 00:27:08.217 [2024-07-15 20:52:42.494314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.217 [2024-07-15 20:52:42.494326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.217 qpair failed and we were unable to recover it. 00:27:08.217 [2024-07-15 20:52:42.494602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.217 [2024-07-15 20:52:42.494613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.217 qpair failed and we were unable to recover it. 
00:27:08.217 [2024-07-15 20:52:42.494847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.217 [2024-07-15 20:52:42.494859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.217 qpair failed and we were unable to recover it. 00:27:08.217 [2024-07-15 20:52:42.495072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.217 [2024-07-15 20:52:42.495084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.217 qpair failed and we were unable to recover it. 00:27:08.217 [2024-07-15 20:52:42.495345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.217 [2024-07-15 20:52:42.495357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.217 qpair failed and we were unable to recover it. 00:27:08.217 [2024-07-15 20:52:42.495596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.217 [2024-07-15 20:52:42.495608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.217 qpair failed and we were unable to recover it. 00:27:08.217 [2024-07-15 20:52:42.495878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.217 [2024-07-15 20:52:42.495889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.217 qpair failed and we were unable to recover it. 00:27:08.217 [2024-07-15 20:52:42.496090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.217 [2024-07-15 20:52:42.496102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.217 qpair failed and we were unable to recover it. 00:27:08.217 [2024-07-15 20:52:42.496297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.217 [2024-07-15 20:52:42.496309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.217 qpair failed and we were unable to recover it. 00:27:08.217 [2024-07-15 20:52:42.496543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.217 [2024-07-15 20:52:42.496555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.217 qpair failed and we were unable to recover it. 00:27:08.217 [2024-07-15 20:52:42.496770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.217 [2024-07-15 20:52:42.496783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.217 qpair failed and we were unable to recover it. 00:27:08.218 [2024-07-15 20:52:42.496993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.497005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 
00:27:08.218 [2024-07-15 20:52:42.497266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.497278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 00:27:08.218 [2024-07-15 20:52:42.497446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.497458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 00:27:08.218 [2024-07-15 20:52:42.497720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.497732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 00:27:08.218 [2024-07-15 20:52:42.497994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.498006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 00:27:08.218 [2024-07-15 20:52:42.498292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.498304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 00:27:08.218 [2024-07-15 20:52:42.498564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.498575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 00:27:08.218 [2024-07-15 20:52:42.498744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.498756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 00:27:08.218 [2024-07-15 20:52:42.498988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.498999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 00:27:08.218 [2024-07-15 20:52:42.499180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.499192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 00:27:08.218 [2024-07-15 20:52:42.499455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.499467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 
00:27:08.218 [2024-07-15 20:52:42.499677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.499688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 00:27:08.218 [2024-07-15 20:52:42.499891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.499904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 00:27:08.218 [2024-07-15 20:52:42.500177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.500188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 00:27:08.218 [2024-07-15 20:52:42.500366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.500378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 00:27:08.218 [2024-07-15 20:52:42.500640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.500652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 00:27:08.218 [2024-07-15 20:52:42.500914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.500926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 00:27:08.218 [2024-07-15 20:52:42.501161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.501172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 00:27:08.218 [2024-07-15 20:52:42.501295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.501308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 00:27:08.218 [2024-07-15 20:52:42.501493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.501504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 00:27:08.218 [2024-07-15 20:52:42.501762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.501773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 
00:27:08.218 [2024-07-15 20:52:42.501877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.501889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 00:27:08.218 [2024-07-15 20:52:42.502100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.502113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 00:27:08.218 [2024-07-15 20:52:42.502387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.502399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 00:27:08.218 [2024-07-15 20:52:42.502610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.502622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 00:27:08.218 [2024-07-15 20:52:42.502821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.502833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 00:27:08.218 [2024-07-15 20:52:42.503067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.503078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 00:27:08.218 [2024-07-15 20:52:42.503312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.503324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 00:27:08.218 [2024-07-15 20:52:42.503506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.503518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 00:27:08.218 [2024-07-15 20:52:42.503780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.503792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 00:27:08.218 [2024-07-15 20:52:42.503982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.503994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 
00:27:08.218 [2024-07-15 20:52:42.504183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.504195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 00:27:08.218 [2024-07-15 20:52:42.504429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.504441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 00:27:08.218 [2024-07-15 20:52:42.504702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.504714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 00:27:08.218 [2024-07-15 20:52:42.504888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.504900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 00:27:08.218 [2024-07-15 20:52:42.505133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.505144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 00:27:08.218 [2024-07-15 20:52:42.505350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.505362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 00:27:08.218 [2024-07-15 20:52:42.505623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.505635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 00:27:08.218 [2024-07-15 20:52:42.505815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.505827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 00:27:08.218 [2024-07-15 20:52:42.505995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.506006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 00:27:08.218 [2024-07-15 20:52:42.506219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.218 [2024-07-15 20:52:42.506234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.218 qpair failed and we were unable to recover it. 
00:27:08.219 [2024-07-15 20:52:42.506488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.219 [2024-07-15 20:52:42.506500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.219 qpair failed and we were unable to recover it. 00:27:08.219 [2024-07-15 20:52:42.506641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.219 [2024-07-15 20:52:42.506652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.219 qpair failed and we were unable to recover it. 00:27:08.219 [2024-07-15 20:52:42.506821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.219 [2024-07-15 20:52:42.506832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.219 qpair failed and we were unable to recover it. 00:27:08.219 [2024-07-15 20:52:42.507007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.219 [2024-07-15 20:52:42.507020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.219 qpair failed and we were unable to recover it. 00:27:08.219 [2024-07-15 20:52:42.507254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.219 [2024-07-15 20:52:42.507266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.219 qpair failed and we were unable to recover it. 00:27:08.219 [2024-07-15 20:52:42.507536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.219 [2024-07-15 20:52:42.507549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.219 qpair failed and we were unable to recover it. 00:27:08.219 [2024-07-15 20:52:42.507672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.219 [2024-07-15 20:52:42.507684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.219 qpair failed and we were unable to recover it. 00:27:08.219 [2024-07-15 20:52:42.507793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.219 [2024-07-15 20:52:42.507805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.219 qpair failed and we were unable to recover it. 00:27:08.219 [2024-07-15 20:52:42.507973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.219 [2024-07-15 20:52:42.507985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.219 qpair failed and we were unable to recover it. 00:27:08.219 [2024-07-15 20:52:42.508166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.219 [2024-07-15 20:52:42.508177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.219 qpair failed and we were unable to recover it. 
00:27:08.223 [2024-07-15 20:52:42.542405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.223 [2024-07-15 20:52:42.542417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.223 qpair failed and we were unable to recover it.
00:27:08.223 [2024-07-15 20:52:42.542678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.223 [2024-07-15 20:52:42.542690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.223 qpair failed and we were unable to recover it.
00:27:08.223 [2024-07-15 20:52:42.542968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.223 [2024-07-15 20:52:42.542979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.223 qpair failed and we were unable to recover it.
00:27:08.223 [2024-07-15 20:52:42.543244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.223 [2024-07-15 20:52:42.543256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.223 qpair failed and we were unable to recover it.
00:27:08.223 [2024-07-15 20:52:42.543427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.223 [2024-07-15 20:52:42.543439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.223 qpair failed and we were unable to recover it.
00:27:08.223 [2024-07-15 20:52:42.543701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.223 [2024-07-15 20:52:42.543714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.223 qpair failed and we were unable to recover it.
00:27:08.223 [2024-07-15 20:52:42.543835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.223 [2024-07-15 20:52:42.543847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.223 qpair failed and we were unable to recover it.
00:27:08.223 [2024-07-15 20:52:42.544075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.223 [2024-07-15 20:52:42.544087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.223 qpair failed and we were unable to recover it.
00:27:08.223 [2024-07-15 20:52:42.544308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.223 [2024-07-15 20:52:42.544328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:08.223 qpair failed and we were unable to recover it.
00:27:08.223 [2024-07-15 20:52:42.544584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.223 [2024-07-15 20:52:42.544600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:08.223 qpair failed and we were unable to recover it.
00:27:08.224 [2024-07-15 20:52:42.552255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.552270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 00:27:08.224 [2024-07-15 20:52:42.552550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.552565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 00:27:08.224 [2024-07-15 20:52:42.552850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.552865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 00:27:08.224 [2024-07-15 20:52:42.553062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.553077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 00:27:08.224 [2024-07-15 20:52:42.553217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.553241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 00:27:08.224 [2024-07-15 20:52:42.553386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.553402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 00:27:08.224 [2024-07-15 20:52:42.553672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.553687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 00:27:08.224 [2024-07-15 20:52:42.553869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.553885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 00:27:08.224 [2024-07-15 20:52:42.554012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.554025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 00:27:08.224 [2024-07-15 20:52:42.554213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.554230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 
00:27:08.224 [2024-07-15 20:52:42.554342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.554354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 00:27:08.224 [2024-07-15 20:52:42.554563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.554574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 00:27:08.224 [2024-07-15 20:52:42.554682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.554694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 00:27:08.224 [2024-07-15 20:52:42.554875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.554886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 00:27:08.224 [2024-07-15 20:52:42.554990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.555002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 00:27:08.224 [2024-07-15 20:52:42.555123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.555135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 00:27:08.224 [2024-07-15 20:52:42.555389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.555402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 00:27:08.224 [2024-07-15 20:52:42.555658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.555670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 00:27:08.224 [2024-07-15 20:52:42.555910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.555922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 00:27:08.224 [2024-07-15 20:52:42.556109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.556120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 
00:27:08.224 [2024-07-15 20:52:42.556357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.556369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 00:27:08.224 [2024-07-15 20:52:42.556546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.556561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 00:27:08.224 [2024-07-15 20:52:42.556744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.556756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 00:27:08.224 [2024-07-15 20:52:42.556932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.556944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 00:27:08.224 [2024-07-15 20:52:42.557144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.557155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 00:27:08.224 [2024-07-15 20:52:42.557444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.557457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 00:27:08.224 [2024-07-15 20:52:42.557671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.557682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 00:27:08.224 [2024-07-15 20:52:42.557938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.557949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 00:27:08.224 [2024-07-15 20:52:42.558192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.558204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 00:27:08.224 [2024-07-15 20:52:42.558381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.558393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 
00:27:08.224 [2024-07-15 20:52:42.558572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.558583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 00:27:08.224 [2024-07-15 20:52:42.558817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.558829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 00:27:08.224 [2024-07-15 20:52:42.559070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.559081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 00:27:08.224 [2024-07-15 20:52:42.559350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.559362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 00:27:08.224 [2024-07-15 20:52:42.559607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.559619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 00:27:08.224 [2024-07-15 20:52:42.559804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.559816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 00:27:08.224 [2024-07-15 20:52:42.559930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.559942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 00:27:08.224 [2024-07-15 20:52:42.560177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.560189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 00:27:08.224 [2024-07-15 20:52:42.560427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.560440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 00:27:08.224 [2024-07-15 20:52:42.560545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.224 [2024-07-15 20:52:42.560557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.224 qpair failed and we were unable to recover it. 
00:27:08.225 [2024-07-15 20:52:42.560848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.225 [2024-07-15 20:52:42.560859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.225 qpair failed and we were unable to recover it. 00:27:08.225 [2024-07-15 20:52:42.561116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.225 [2024-07-15 20:52:42.561127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.225 qpair failed and we were unable to recover it. 00:27:08.225 [2024-07-15 20:52:42.561251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.225 [2024-07-15 20:52:42.561262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.225 qpair failed and we were unable to recover it. 00:27:08.225 [2024-07-15 20:52:42.561381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.225 [2024-07-15 20:52:42.561393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.225 qpair failed and we were unable to recover it. 00:27:08.225 [2024-07-15 20:52:42.561635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.225 [2024-07-15 20:52:42.561646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.225 qpair failed and we were unable to recover it. 00:27:08.225 [2024-07-15 20:52:42.561931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.225 [2024-07-15 20:52:42.561942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.225 qpair failed and we were unable to recover it. 00:27:08.225 [2024-07-15 20:52:42.562175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.225 [2024-07-15 20:52:42.562187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.225 qpair failed and we were unable to recover it. 00:27:08.225 [2024-07-15 20:52:42.562291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.225 [2024-07-15 20:52:42.562303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.225 qpair failed and we were unable to recover it. 00:27:08.225 [2024-07-15 20:52:42.562483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.225 [2024-07-15 20:52:42.562495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.225 qpair failed and we were unable to recover it. 00:27:08.225 [2024-07-15 20:52:42.562676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.225 [2024-07-15 20:52:42.562688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.225 qpair failed and we were unable to recover it. 
00:27:08.225 [2024-07-15 20:52:42.562949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.225 [2024-07-15 20:52:42.562961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.225 qpair failed and we were unable to recover it. 00:27:08.225 [2024-07-15 20:52:42.563220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.225 [2024-07-15 20:52:42.563235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.225 qpair failed and we were unable to recover it. 00:27:08.225 [2024-07-15 20:52:42.563370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.225 [2024-07-15 20:52:42.563381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.225 qpair failed and we were unable to recover it. 00:27:08.225 [2024-07-15 20:52:42.563626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.225 [2024-07-15 20:52:42.563637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.225 qpair failed and we were unable to recover it. 00:27:08.225 [2024-07-15 20:52:42.563821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.225 [2024-07-15 20:52:42.563833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.225 qpair failed and we were unable to recover it. 00:27:08.225 [2024-07-15 20:52:42.564018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.225 [2024-07-15 20:52:42.564030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.225 qpair failed and we were unable to recover it. 00:27:08.225 [2024-07-15 20:52:42.564148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.225 [2024-07-15 20:52:42.564160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.225 qpair failed and we were unable to recover it. 00:27:08.225 [2024-07-15 20:52:42.564399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.225 [2024-07-15 20:52:42.564411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.225 qpair failed and we were unable to recover it. 00:27:08.225 [2024-07-15 20:52:42.564537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.225 [2024-07-15 20:52:42.564548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.225 qpair failed and we were unable to recover it. 00:27:08.225 [2024-07-15 20:52:42.564717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.225 [2024-07-15 20:52:42.564731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.225 qpair failed and we were unable to recover it. 
00:27:08.225 [2024-07-15 20:52:42.564853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.225 [2024-07-15 20:52:42.564864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.225 qpair failed and we were unable to recover it. 00:27:08.225 [2024-07-15 20:52:42.565044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.225 [2024-07-15 20:52:42.565057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.225 qpair failed and we were unable to recover it. 00:27:08.225 [2024-07-15 20:52:42.565313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.225 [2024-07-15 20:52:42.565323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.225 qpair failed and we were unable to recover it. 00:27:08.225 [2024-07-15 20:52:42.565511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.225 [2024-07-15 20:52:42.565522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.225 qpair failed and we were unable to recover it. 00:27:08.225 [2024-07-15 20:52:42.565731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.225 [2024-07-15 20:52:42.565741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.225 qpair failed and we were unable to recover it. 00:27:08.225 [2024-07-15 20:52:42.565993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.225 [2024-07-15 20:52:42.566003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.225 qpair failed and we were unable to recover it. 00:27:08.225 [2024-07-15 20:52:42.566286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.225 [2024-07-15 20:52:42.566298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.225 qpair failed and we were unable to recover it. 00:27:08.225 [2024-07-15 20:52:42.566492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.225 [2024-07-15 20:52:42.566503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.225 qpair failed and we were unable to recover it. 00:27:08.225 [2024-07-15 20:52:42.566783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.225 [2024-07-15 20:52:42.566793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.225 qpair failed and we were unable to recover it. 00:27:08.225 [2024-07-15 20:52:42.567050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.225 [2024-07-15 20:52:42.567059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.225 qpair failed and we were unable to recover it. 
00:27:08.225 [2024-07-15 20:52:42.567259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.225 [2024-07-15 20:52:42.567270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.225 qpair failed and we were unable to recover it. 00:27:08.225 [2024-07-15 20:52:42.567529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.225 [2024-07-15 20:52:42.567540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.225 qpair failed and we were unable to recover it. 00:27:08.225 [2024-07-15 20:52:42.567717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.225 [2024-07-15 20:52:42.567726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.225 qpair failed and we were unable to recover it. 00:27:08.225 [2024-07-15 20:52:42.567984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.225 [2024-07-15 20:52:42.567995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.225 qpair failed and we were unable to recover it. 00:27:08.225 [2024-07-15 20:52:42.568200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.225 [2024-07-15 20:52:42.568209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.225 qpair failed and we were unable to recover it. 00:27:08.225 [2024-07-15 20:52:42.568483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.568495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 00:27:08.226 [2024-07-15 20:52:42.568664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.568674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 00:27:08.226 [2024-07-15 20:52:42.568800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.568811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 00:27:08.226 [2024-07-15 20:52:42.569068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.569078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 00:27:08.226 [2024-07-15 20:52:42.569342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.569352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 
00:27:08.226 [2024-07-15 20:52:42.569536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.569546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 00:27:08.226 [2024-07-15 20:52:42.569735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.569745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 00:27:08.226 [2024-07-15 20:52:42.569990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.569999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 00:27:08.226 [2024-07-15 20:52:42.570186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.570196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 00:27:08.226 [2024-07-15 20:52:42.570430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.570439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 00:27:08.226 [2024-07-15 20:52:42.570698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.570707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 00:27:08.226 [2024-07-15 20:52:42.570994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.571004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 00:27:08.226 [2024-07-15 20:52:42.571212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.571221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 00:27:08.226 [2024-07-15 20:52:42.571480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.571490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 00:27:08.226 [2024-07-15 20:52:42.571773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.571783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 
00:27:08.226 [2024-07-15 20:52:42.572025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.572034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 00:27:08.226 [2024-07-15 20:52:42.572272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.572282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 00:27:08.226 [2024-07-15 20:52:42.572539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.572548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 00:27:08.226 [2024-07-15 20:52:42.572688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.572698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 00:27:08.226 [2024-07-15 20:52:42.572952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.572962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 00:27:08.226 [2024-07-15 20:52:42.573219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.573232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 00:27:08.226 [2024-07-15 20:52:42.573481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.573490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 00:27:08.226 [2024-07-15 20:52:42.573707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.573717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 00:27:08.226 [2024-07-15 20:52:42.574003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.574013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 00:27:08.226 [2024-07-15 20:52:42.574275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.574286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 
00:27:08.226 [2024-07-15 20:52:42.574518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.574528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 00:27:08.226 [2024-07-15 20:52:42.574677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.574689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 00:27:08.226 [2024-07-15 20:52:42.574944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.574953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 00:27:08.226 [2024-07-15 20:52:42.575192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.575202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 00:27:08.226 [2024-07-15 20:52:42.575371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.575381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 00:27:08.226 [2024-07-15 20:52:42.575562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.575572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 00:27:08.226 [2024-07-15 20:52:42.575810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.575819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 00:27:08.226 [2024-07-15 20:52:42.576006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.576016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 00:27:08.226 [2024-07-15 20:52:42.576272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.576282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 00:27:08.226 [2024-07-15 20:52:42.576480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.576490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 
00:27:08.226 [2024-07-15 20:52:42.576672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.576682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 00:27:08.226 [2024-07-15 20:52:42.576964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.576973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 00:27:08.226 [2024-07-15 20:52:42.577261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.577271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 00:27:08.226 [2024-07-15 20:52:42.577466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.577475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 00:27:08.226 [2024-07-15 20:52:42.577765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.577775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 00:27:08.226 [2024-07-15 20:52:42.577985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.226 [2024-07-15 20:52:42.577995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.226 qpair failed and we were unable to recover it. 00:27:08.226 [2024-07-15 20:52:42.578183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.227 [2024-07-15 20:52:42.578192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.227 qpair failed and we were unable to recover it. 00:27:08.227 [2024-07-15 20:52:42.578451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.227 [2024-07-15 20:52:42.578462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.227 qpair failed and we were unable to recover it. 00:27:08.227 [2024-07-15 20:52:42.578727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.227 [2024-07-15 20:52:42.578737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.227 qpair failed and we were unable to recover it. 00:27:08.227 [2024-07-15 20:52:42.578951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.227 [2024-07-15 20:52:42.578961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.227 qpair failed and we were unable to recover it. 
00:27:08.227 [2024-07-15 20:52:42.579133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.227 [2024-07-15 20:52:42.579142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.227 qpair failed and we were unable to recover it. 00:27:08.227 [2024-07-15 20:52:42.579312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.227 [2024-07-15 20:52:42.579321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.227 qpair failed and we were unable to recover it. 00:27:08.227 [2024-07-15 20:52:42.579561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.227 [2024-07-15 20:52:42.579571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.227 qpair failed and we were unable to recover it. 00:27:08.227 [2024-07-15 20:52:42.579696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.227 [2024-07-15 20:52:42.579706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.227 qpair failed and we were unable to recover it. 00:27:08.227 [2024-07-15 20:52:42.580019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.227 [2024-07-15 20:52:42.580028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.227 qpair failed and we were unable to recover it. 00:27:08.227 [2024-07-15 20:52:42.580284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.227 [2024-07-15 20:52:42.580294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.227 qpair failed and we were unable to recover it. 00:27:08.227 [2024-07-15 20:52:42.580552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.227 [2024-07-15 20:52:42.580562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.227 qpair failed and we were unable to recover it. 00:27:08.227 [2024-07-15 20:52:42.580686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.227 [2024-07-15 20:52:42.580695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.227 qpair failed and we were unable to recover it. 00:27:08.227 [2024-07-15 20:52:42.580922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.227 [2024-07-15 20:52:42.580931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.227 qpair failed and we were unable to recover it. 00:27:08.227 [2024-07-15 20:52:42.581219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.227 [2024-07-15 20:52:42.581231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.227 qpair failed and we were unable to recover it. 
00:27:08.227 [2024-07-15 20:52:42.581488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.227 [2024-07-15 20:52:42.581497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.227 qpair failed and we were unable to recover it. 00:27:08.227 [2024-07-15 20:52:42.581789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.227 [2024-07-15 20:52:42.581799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.227 qpair failed and we were unable to recover it. 00:27:08.227 [2024-07-15 20:52:42.581965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.227 [2024-07-15 20:52:42.581975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.227 qpair failed and we were unable to recover it. 00:27:08.227 [2024-07-15 20:52:42.582239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.227 [2024-07-15 20:52:42.582250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.227 qpair failed and we were unable to recover it. 00:27:08.227 [2024-07-15 20:52:42.582522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.227 [2024-07-15 20:52:42.582532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.227 qpair failed and we were unable to recover it. 00:27:08.227 [2024-07-15 20:52:42.582714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.227 [2024-07-15 20:52:42.582724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.227 qpair failed and we were unable to recover it. 00:27:08.227 [2024-07-15 20:52:42.582959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.227 [2024-07-15 20:52:42.582968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.227 qpair failed and we were unable to recover it. 00:27:08.227 [2024-07-15 20:52:42.583203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.227 [2024-07-15 20:52:42.583212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.227 qpair failed and we were unable to recover it. 00:27:08.227 [2024-07-15 20:52:42.583435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.227 [2024-07-15 20:52:42.583445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.227 qpair failed and we were unable to recover it. 00:27:08.227 [2024-07-15 20:52:42.583574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.227 [2024-07-15 20:52:42.583584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.227 qpair failed and we were unable to recover it. 
00:27:08.227 [2024-07-15 20:52:42.583702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.227 [2024-07-15 20:52:42.583712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.227 qpair failed and we were unable to recover it.
00:27:08.227 [... the three lines above repeat verbatim for every retry, with only the bracketed timestamps advancing (20:52:42.583909 through 20:52:42.630522); each attempt hits the same tqpair 0x7f4840000b90 at 10.0.0.2, port 4420 and fails with errno = 111 ...]
00:27:08.232 [2024-07-15 20:52:42.630692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.232 [2024-07-15 20:52:42.630701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.232 qpair failed and we were unable to recover it.
00:27:08.232 [2024-07-15 20:52:42.630981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.232 [2024-07-15 20:52:42.630990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.232 qpair failed and we were unable to recover it. 00:27:08.232 [2024-07-15 20:52:42.631247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.232 [2024-07-15 20:52:42.631257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.232 qpair failed and we were unable to recover it. 00:27:08.232 [2024-07-15 20:52:42.631475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.232 [2024-07-15 20:52:42.631484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.232 qpair failed and we were unable to recover it. 00:27:08.232 [2024-07-15 20:52:42.631755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.232 [2024-07-15 20:52:42.631765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.232 qpair failed and we were unable to recover it. 00:27:08.232 [2024-07-15 20:52:42.631951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.232 [2024-07-15 20:52:42.631960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 00:27:08.233 [2024-07-15 20:52:42.632216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.632228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 00:27:08.233 [2024-07-15 20:52:42.632423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.632433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 00:27:08.233 [2024-07-15 20:52:42.632692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.632702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 00:27:08.233 [2024-07-15 20:52:42.632888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.632898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 00:27:08.233 [2024-07-15 20:52:42.633137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.633146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 
00:27:08.233 [2024-07-15 20:52:42.633401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.633411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 00:27:08.233 [2024-07-15 20:52:42.633672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.633681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 00:27:08.233 [2024-07-15 20:52:42.633941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.633951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 00:27:08.233 [2024-07-15 20:52:42.634186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.634196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 00:27:08.233 [2024-07-15 20:52:42.634456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.634466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 00:27:08.233 [2024-07-15 20:52:42.634636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.634645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 00:27:08.233 [2024-07-15 20:52:42.634821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.634831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 00:27:08.233 [2024-07-15 20:52:42.635115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.635125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 00:27:08.233 [2024-07-15 20:52:42.635401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.635413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 00:27:08.233 [2024-07-15 20:52:42.635662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.635671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 
00:27:08.233 [2024-07-15 20:52:42.635921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.635931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 00:27:08.233 [2024-07-15 20:52:42.636168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.636178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 00:27:08.233 [2024-07-15 20:52:42.636364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.636374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 00:27:08.233 [2024-07-15 20:52:42.636635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.636644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 00:27:08.233 [2024-07-15 20:52:42.636838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.636847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 00:27:08.233 [2024-07-15 20:52:42.637137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.637147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 00:27:08.233 [2024-07-15 20:52:42.637411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.637421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 00:27:08.233 [2024-07-15 20:52:42.637605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.637614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 00:27:08.233 [2024-07-15 20:52:42.637810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.637820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 00:27:08.233 [2024-07-15 20:52:42.638010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.638019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 
00:27:08.233 [2024-07-15 20:52:42.638288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.638298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 00:27:08.233 [2024-07-15 20:52:42.638531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.638540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 00:27:08.233 [2024-07-15 20:52:42.638726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.638736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 00:27:08.233 [2024-07-15 20:52:42.638999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.639008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 00:27:08.233 [2024-07-15 20:52:42.639265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.639275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 00:27:08.233 [2024-07-15 20:52:42.639460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.639469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 00:27:08.233 [2024-07-15 20:52:42.639702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.639712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 00:27:08.233 [2024-07-15 20:52:42.639970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.639980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 00:27:08.233 [2024-07-15 20:52:42.640153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.640162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 00:27:08.233 [2024-07-15 20:52:42.640435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.640445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 
00:27:08.233 [2024-07-15 20:52:42.640703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.640713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 00:27:08.233 [2024-07-15 20:52:42.640952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.640962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 00:27:08.233 [2024-07-15 20:52:42.641229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.641239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 00:27:08.233 [2024-07-15 20:52:42.641500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.641509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 00:27:08.233 [2024-07-15 20:52:42.641689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.641698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 00:27:08.233 [2024-07-15 20:52:42.641961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.233 [2024-07-15 20:52:42.641971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.233 qpair failed and we were unable to recover it. 00:27:08.234 [2024-07-15 20:52:42.642238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.234 [2024-07-15 20:52:42.642247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.234 qpair failed and we were unable to recover it. 00:27:08.234 [2024-07-15 20:52:42.642530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.234 [2024-07-15 20:52:42.642540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.234 qpair failed and we were unable to recover it. 00:27:08.234 [2024-07-15 20:52:42.642801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.234 [2024-07-15 20:52:42.642810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.234 qpair failed and we were unable to recover it. 00:27:08.234 [2024-07-15 20:52:42.642986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.234 [2024-07-15 20:52:42.642995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.234 qpair failed and we were unable to recover it. 
00:27:08.234 [2024-07-15 20:52:42.643162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.234 [2024-07-15 20:52:42.643172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.234 qpair failed and we were unable to recover it. 00:27:08.234 [2024-07-15 20:52:42.643428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.234 [2024-07-15 20:52:42.643438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.234 qpair failed and we were unable to recover it. 00:27:08.234 [2024-07-15 20:52:42.643681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.234 [2024-07-15 20:52:42.643691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.234 qpair failed and we were unable to recover it. 00:27:08.234 [2024-07-15 20:52:42.643947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.234 [2024-07-15 20:52:42.643957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.234 qpair failed and we were unable to recover it. 00:27:08.234 [2024-07-15 20:52:42.644141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.234 [2024-07-15 20:52:42.644150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.234 qpair failed and we were unable to recover it. 00:27:08.234 [2024-07-15 20:52:42.644355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.234 [2024-07-15 20:52:42.644365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.234 qpair failed and we were unable to recover it. 00:27:08.234 [2024-07-15 20:52:42.644551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.234 [2024-07-15 20:52:42.644560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.234 qpair failed and we were unable to recover it. 00:27:08.234 [2024-07-15 20:52:42.644750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.234 [2024-07-15 20:52:42.644759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.234 qpair failed and we were unable to recover it. 00:27:08.234 [2024-07-15 20:52:42.644908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.234 [2024-07-15 20:52:42.644919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.234 qpair failed and we were unable to recover it. 00:27:08.234 [2024-07-15 20:52:42.645208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.234 [2024-07-15 20:52:42.645217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.234 qpair failed and we were unable to recover it. 
00:27:08.234 [2024-07-15 20:52:42.645391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.234 [2024-07-15 20:52:42.645401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.234 qpair failed and we were unable to recover it. 00:27:08.234 [2024-07-15 20:52:42.645568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.234 [2024-07-15 20:52:42.645577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.234 qpair failed and we were unable to recover it. 00:27:08.234 [2024-07-15 20:52:42.645859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.234 [2024-07-15 20:52:42.645869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.234 qpair failed and we were unable to recover it. 00:27:08.234 [2024-07-15 20:52:42.646127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.234 [2024-07-15 20:52:42.646136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.234 qpair failed and we were unable to recover it. 00:27:08.234 [2024-07-15 20:52:42.646370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.234 [2024-07-15 20:52:42.646380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.234 qpair failed and we were unable to recover it. 00:27:08.234 [2024-07-15 20:52:42.646565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.234 [2024-07-15 20:52:42.646575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.234 qpair failed and we were unable to recover it. 00:27:08.234 [2024-07-15 20:52:42.646781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.234 [2024-07-15 20:52:42.646790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.234 qpair failed and we were unable to recover it. 00:27:08.234 [2024-07-15 20:52:42.647041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.234 [2024-07-15 20:52:42.647050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.234 qpair failed and we were unable to recover it. 00:27:08.234 [2024-07-15 20:52:42.647234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.234 [2024-07-15 20:52:42.647245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.234 qpair failed and we were unable to recover it. 00:27:08.234 [2024-07-15 20:52:42.647440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.234 [2024-07-15 20:52:42.647449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.234 qpair failed and we were unable to recover it. 
00:27:08.234 [2024-07-15 20:52:42.647660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.234 [2024-07-15 20:52:42.647669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.234 qpair failed and we were unable to recover it. 00:27:08.234 [2024-07-15 20:52:42.647920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.234 [2024-07-15 20:52:42.647929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.234 qpair failed and we were unable to recover it. 00:27:08.234 [2024-07-15 20:52:42.648124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.234 [2024-07-15 20:52:42.648134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.234 qpair failed and we were unable to recover it. 00:27:08.234 [2024-07-15 20:52:42.648306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.234 [2024-07-15 20:52:42.648316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.234 qpair failed and we were unable to recover it. 00:27:08.234 [2024-07-15 20:52:42.648576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.234 [2024-07-15 20:52:42.648585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.234 qpair failed and we were unable to recover it. 00:27:08.234 [2024-07-15 20:52:42.648839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.234 [2024-07-15 20:52:42.648848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.234 qpair failed and we were unable to recover it. 00:27:08.234 [2024-07-15 20:52:42.649096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.234 [2024-07-15 20:52:42.649105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.234 qpair failed and we were unable to recover it. 00:27:08.234 [2024-07-15 20:52:42.649358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.234 [2024-07-15 20:52:42.649368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.234 qpair failed and we were unable to recover it. 00:27:08.234 [2024-07-15 20:52:42.649564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.234 [2024-07-15 20:52:42.649574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.234 qpair failed and we were unable to recover it. 00:27:08.234 [2024-07-15 20:52:42.649811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.234 [2024-07-15 20:52:42.649821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.234 qpair failed and we were unable to recover it. 
00:27:08.234 [2024-07-15 20:52:42.650082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.234 [2024-07-15 20:52:42.650092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.234 qpair failed and we were unable to recover it. 00:27:08.234 [2024-07-15 20:52:42.650355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.650366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 00:27:08.235 [2024-07-15 20:52:42.650621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.650631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 00:27:08.235 [2024-07-15 20:52:42.650872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.650881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 00:27:08.235 [2024-07-15 20:52:42.651141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.651150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 00:27:08.235 [2024-07-15 20:52:42.651408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.651419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 00:27:08.235 [2024-07-15 20:52:42.651659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.651669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 00:27:08.235 [2024-07-15 20:52:42.651951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.651960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 00:27:08.235 [2024-07-15 20:52:42.652090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.652100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 00:27:08.235 [2024-07-15 20:52:42.652290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.652300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 
00:27:08.235 [2024-07-15 20:52:42.652480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.652490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 00:27:08.235 [2024-07-15 20:52:42.652658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.652668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 00:27:08.235 [2024-07-15 20:52:42.652858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.652867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 00:27:08.235 [2024-07-15 20:52:42.653039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.653049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 00:27:08.235 [2024-07-15 20:52:42.653285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.653295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 00:27:08.235 [2024-07-15 20:52:42.653548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.653558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 00:27:08.235 [2024-07-15 20:52:42.653819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.653829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 00:27:08.235 [2024-07-15 20:52:42.654071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.654081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 00:27:08.235 [2024-07-15 20:52:42.654329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.654341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 00:27:08.235 [2024-07-15 20:52:42.654605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.654614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 
00:27:08.235 [2024-07-15 20:52:42.654853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.654863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 00:27:08.235 [2024-07-15 20:52:42.655049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.655059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 00:27:08.235 [2024-07-15 20:52:42.655317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.655327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 00:27:08.235 [2024-07-15 20:52:42.655453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.655463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 00:27:08.235 [2024-07-15 20:52:42.655665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.655675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 00:27:08.235 [2024-07-15 20:52:42.655936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.655946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 00:27:08.235 [2024-07-15 20:52:42.656154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.656163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 00:27:08.235 [2024-07-15 20:52:42.656373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.656383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 00:27:08.235 [2024-07-15 20:52:42.656640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.656649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 00:27:08.235 [2024-07-15 20:52:42.656890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.656900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 
00:27:08.235 [2024-07-15 20:52:42.657085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.657095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 00:27:08.235 [2024-07-15 20:52:42.657329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.657340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 00:27:08.235 [2024-07-15 20:52:42.657598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.657608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 00:27:08.235 [2024-07-15 20:52:42.657872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.657882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 00:27:08.235 [2024-07-15 20:52:42.658158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.658168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 00:27:08.235 [2024-07-15 20:52:42.658340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.658350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 00:27:08.235 [2024-07-15 20:52:42.658488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.658498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 00:27:08.235 [2024-07-15 20:52:42.658754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.658765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 00:27:08.235 [2024-07-15 20:52:42.658944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.658954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 00:27:08.235 [2024-07-15 20:52:42.659216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.659231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 
00:27:08.235 [2024-07-15 20:52:42.659476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.659486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 00:27:08.235 [2024-07-15 20:52:42.659732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.235 [2024-07-15 20:52:42.659742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.235 qpair failed and we were unable to recover it. 00:27:08.235 [2024-07-15 20:52:42.659927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.236 [2024-07-15 20:52:42.659938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.236 qpair failed and we were unable to recover it. 00:27:08.236 [2024-07-15 20:52:42.660197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.236 [2024-07-15 20:52:42.660206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.236 qpair failed and we were unable to recover it. 00:27:08.236 [2024-07-15 20:52:42.660413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.236 [2024-07-15 20:52:42.660423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.236 qpair failed and we were unable to recover it. 00:27:08.236 [2024-07-15 20:52:42.660620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.236 [2024-07-15 20:52:42.660630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.236 qpair failed and we were unable to recover it. 00:27:08.236 [2024-07-15 20:52:42.660876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.236 [2024-07-15 20:52:42.660886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.236 qpair failed and we were unable to recover it. 00:27:08.236 [2024-07-15 20:52:42.661082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.236 [2024-07-15 20:52:42.661092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.236 qpair failed and we were unable to recover it. 00:27:08.236 [2024-07-15 20:52:42.661328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.236 [2024-07-15 20:52:42.661338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.236 qpair failed and we were unable to recover it. 00:27:08.236 [2024-07-15 20:52:42.661512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.236 [2024-07-15 20:52:42.661522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.236 qpair failed and we were unable to recover it. 
00:27:08.236 [2024-07-15 20:52:42.661761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.236 [2024-07-15 20:52:42.661772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.236 qpair failed and we were unable to recover it.
[... the same three-line sequence repeats continuously from 20:52:42.662056 through 20:52:42.708729, alternating between tqpair=0x7f4840000b90 and tqpair=0x7f4838000b90, always with addr=10.0.0.2, port=4420; every qpair failed and could not be recovered ...]
00:27:08.524 [2024-07-15 20:52:42.708980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.524 [2024-07-15 20:52:42.708990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.524 qpair failed and we were unable to recover it.
00:27:08.524 [2024-07-15 20:52:42.709200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-07-15 20:52:42.709210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-07-15 20:52:42.709419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-07-15 20:52:42.709430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-07-15 20:52:42.709688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-07-15 20:52:42.709697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-07-15 20:52:42.709883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-07-15 20:52:42.709893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-07-15 20:52:42.710154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-07-15 20:52:42.710164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-07-15 20:52:42.710400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-07-15 20:52:42.710410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-07-15 20:52:42.710673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-07-15 20:52:42.710683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-07-15 20:52:42.710936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-07-15 20:52:42.710946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-07-15 20:52:42.711126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-07-15 20:52:42.711136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-07-15 20:52:42.711391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-07-15 20:52:42.711401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 
00:27:08.524 [2024-07-15 20:52:42.711663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-07-15 20:52:42.711673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-07-15 20:52:42.711902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-07-15 20:52:42.711912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-07-15 20:52:42.712176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-07-15 20:52:42.712186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-07-15 20:52:42.712392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-07-15 20:52:42.712402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-07-15 20:52:42.712597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-07-15 20:52:42.712608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-07-15 20:52:42.712865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-07-15 20:52:42.712875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.524 [2024-07-15 20:52:42.713112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.524 [2024-07-15 20:52:42.713122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.524 qpair failed and we were unable to recover it. 00:27:08.525 [2024-07-15 20:52:42.713356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.713366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-07-15 20:52:42.713482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.713492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-07-15 20:52:42.713758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.713767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 
00:27:08.525 [2024-07-15 20:52:42.714031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.714041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-07-15 20:52:42.714277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.714287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-07-15 20:52:42.714402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.714412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-07-15 20:52:42.714671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.714681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-07-15 20:52:42.714942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.714952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-07-15 20:52:42.715187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.715198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-07-15 20:52:42.715454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.715464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-07-15 20:52:42.715648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.715658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-07-15 20:52:42.715770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.715780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-07-15 20:52:42.715972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.715982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 
00:27:08.525 [2024-07-15 20:52:42.716163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.716173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-07-15 20:52:42.716339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.716349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-07-15 20:52:42.716628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.716638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-07-15 20:52:42.716855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.716865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-07-15 20:52:42.717127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.717136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-07-15 20:52:42.717323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.717334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-07-15 20:52:42.717521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.717530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-07-15 20:52:42.717800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.717810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-07-15 20:52:42.717934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.717944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-07-15 20:52:42.718180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.718190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 
00:27:08.525 [2024-07-15 20:52:42.718357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.718368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-07-15 20:52:42.718618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.718628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-07-15 20:52:42.718883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.718893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-07-15 20:52:42.719074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.719084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-07-15 20:52:42.719253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.719263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-07-15 20:52:42.719496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.719506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-07-15 20:52:42.719780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.719791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-07-15 20:52:42.720004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.720014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-07-15 20:52:42.720300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.720310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-07-15 20:52:42.720525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.720539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 
00:27:08.525 [2024-07-15 20:52:42.720725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.720737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-07-15 20:52:42.720873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.720883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-07-15 20:52:42.721168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.721179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-07-15 20:52:42.721458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.721468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-07-15 20:52:42.721707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.721718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-07-15 20:52:42.721852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.721866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-07-15 20:52:42.722082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.722093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.525 qpair failed and we were unable to recover it. 00:27:08.525 [2024-07-15 20:52:42.722333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.525 [2024-07-15 20:52:42.722343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-07-15 20:52:42.722517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.722527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-07-15 20:52:42.722785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.722795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 
00:27:08.526 [2024-07-15 20:52:42.723029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.723038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-07-15 20:52:42.723305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.723315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-07-15 20:52:42.723491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.723501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-07-15 20:52:42.723678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.723689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-07-15 20:52:42.723890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.723900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-07-15 20:52:42.724019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.724032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-07-15 20:52:42.724202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.724212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-07-15 20:52:42.724343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.724353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-07-15 20:52:42.724532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.724542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-07-15 20:52:42.724775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.724785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 
00:27:08.526 [2024-07-15 20:52:42.725027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.725037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-07-15 20:52:42.725215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.725228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-07-15 20:52:42.725463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.725473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-07-15 20:52:42.725730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.725740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-07-15 20:52:42.726026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.726036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-07-15 20:52:42.726217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.726233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-07-15 20:52:42.726446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.726457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-07-15 20:52:42.726589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.726599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-07-15 20:52:42.726793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.726803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-07-15 20:52:42.726992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.727002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 
00:27:08.526 [2024-07-15 20:52:42.727211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.727220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-07-15 20:52:42.727393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.727404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-07-15 20:52:42.727658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.727668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-07-15 20:52:42.727847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.727857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-07-15 20:52:42.728116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.728126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-07-15 20:52:42.728388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.728399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-07-15 20:52:42.728633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.728643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-07-15 20:52:42.728899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.728909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-07-15 20:52:42.729097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.729108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-07-15 20:52:42.729343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.729353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 
00:27:08.526 [2024-07-15 20:52:42.729559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.729569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-07-15 20:52:42.729803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.729814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-07-15 20:52:42.729983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.729993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-07-15 20:52:42.730221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.730234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-07-15 20:52:42.730513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.730523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-07-15 20:52:42.730769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.730779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-07-15 20:52:42.731014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.731023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-07-15 20:52:42.731231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.526 [2024-07-15 20:52:42.731241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.526 qpair failed and we were unable to recover it. 00:27:08.526 [2024-07-15 20:52:42.731474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-07-15 20:52:42.731484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-07-15 20:52:42.731654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-07-15 20:52:42.731664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 
00:27:08.527 [2024-07-15 20:52:42.731899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-07-15 20:52:42.731909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-07-15 20:52:42.732142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-07-15 20:52:42.732152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-07-15 20:52:42.732444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-07-15 20:52:42.732454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-07-15 20:52:42.732707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-07-15 20:52:42.732717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-07-15 20:52:42.732894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-07-15 20:52:42.732904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-07-15 20:52:42.733082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-07-15 20:52:42.733093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-07-15 20:52:42.733383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-07-15 20:52:42.733393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-07-15 20:52:42.733670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-07-15 20:52:42.733681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-07-15 20:52:42.733852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-07-15 20:52:42.733862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-07-15 20:52:42.734030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-07-15 20:52:42.734040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 
00:27:08.527 [2024-07-15 20:52:42.734249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-07-15 20:52:42.734259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-07-15 20:52:42.734519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-07-15 20:52:42.734529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-07-15 20:52:42.734705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-07-15 20:52:42.734715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-07-15 20:52:42.734916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-07-15 20:52:42.734926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-07-15 20:52:42.735093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-07-15 20:52:42.735103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-07-15 20:52:42.735282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-07-15 20:52:42.735293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-07-15 20:52:42.735481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-07-15 20:52:42.735490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-07-15 20:52:42.735684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-07-15 20:52:42.735693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-07-15 20:52:42.735832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-07-15 20:52:42.735842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-07-15 20:52:42.736126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-07-15 20:52:42.736136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 
00:27:08.527 [2024-07-15 20:52:42.736371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-07-15 20:52:42.736382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-07-15 20:52:42.736616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-07-15 20:52:42.736626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-07-15 20:52:42.736802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-07-15 20:52:42.736812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-07-15 20:52:42.737092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-07-15 20:52:42.737102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-07-15 20:52:42.737337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-07-15 20:52:42.737347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-07-15 20:52:42.737580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-07-15 20:52:42.737590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-07-15 20:52:42.737841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-07-15 20:52:42.737850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-07-15 20:52:42.738088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-07-15 20:52:42.738098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-07-15 20:52:42.738382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-07-15 20:52:42.738392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-07-15 20:52:42.738629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-07-15 20:52:42.738639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 
00:27:08.527 [2024-07-15 20:52:42.738827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-07-15 20:52:42.738837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-07-15 20:52:42.739096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-07-15 20:52:42.739105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-07-15 20:52:42.739366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-07-15 20:52:42.739377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.527 [2024-07-15 20:52:42.739589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.527 [2024-07-15 20:52:42.739598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.527 qpair failed and we were unable to recover it. 00:27:08.528 [2024-07-15 20:52:42.739871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.528 [2024-07-15 20:52:42.739881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.528 qpair failed and we were unable to recover it. 00:27:08.528 [2024-07-15 20:52:42.740051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.528 [2024-07-15 20:52:42.740061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.528 qpair failed and we were unable to recover it. 00:27:08.528 [2024-07-15 20:52:42.740297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.528 [2024-07-15 20:52:42.740307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.528 qpair failed and we were unable to recover it. 00:27:08.528 [2024-07-15 20:52:42.740564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.528 [2024-07-15 20:52:42.740574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.528 qpair failed and we were unable to recover it. 00:27:08.528 [2024-07-15 20:52:42.740751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.528 [2024-07-15 20:52:42.740761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.528 qpair failed and we were unable to recover it. 00:27:08.528 [2024-07-15 20:52:42.740995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.528 [2024-07-15 20:52:42.741004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.528 qpair failed and we were unable to recover it. 
00:27:08.533 [2024-07-15 20:52:42.788909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-07-15 20:52:42.788919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-07-15 20:52:42.789182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-07-15 20:52:42.789192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-07-15 20:52:42.789454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-07-15 20:52:42.789465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-07-15 20:52:42.789651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-07-15 20:52:42.789662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-07-15 20:52:42.789897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-07-15 20:52:42.789907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-07-15 20:52:42.790140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-07-15 20:52:42.790153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-07-15 20:52:42.790351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-07-15 20:52:42.790361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-07-15 20:52:42.790621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-07-15 20:52:42.790631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-07-15 20:52:42.790838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-07-15 20:52:42.790848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-07-15 20:52:42.791034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-07-15 20:52:42.791043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 
00:27:08.533 [2024-07-15 20:52:42.791292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-07-15 20:52:42.791302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-07-15 20:52:42.791535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-07-15 20:52:42.791545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-07-15 20:52:42.791747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-07-15 20:52:42.791756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-07-15 20:52:42.791937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-07-15 20:52:42.791948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-07-15 20:52:42.792203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-07-15 20:52:42.792213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-07-15 20:52:42.792474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-07-15 20:52:42.792484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-07-15 20:52:42.792767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-07-15 20:52:42.792777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-07-15 20:52:42.793018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-07-15 20:52:42.793029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-07-15 20:52:42.793262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-07-15 20:52:42.793272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-07-15 20:52:42.793495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-07-15 20:52:42.793506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 
00:27:08.533 [2024-07-15 20:52:42.793740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-07-15 20:52:42.793750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-07-15 20:52:42.793928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-07-15 20:52:42.793938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-07-15 20:52:42.794150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-07-15 20:52:42.794160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-07-15 20:52:42.794369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-07-15 20:52:42.794379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-07-15 20:52:42.794547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-07-15 20:52:42.794558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-07-15 20:52:42.794738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-07-15 20:52:42.794748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-07-15 20:52:42.794927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-07-15 20:52:42.794937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-07-15 20:52:42.795169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-07-15 20:52:42.795179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-07-15 20:52:42.795425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-07-15 20:52:42.795436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-07-15 20:52:42.795625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-07-15 20:52:42.795635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 
00:27:08.533 [2024-07-15 20:52:42.795840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-07-15 20:52:42.795850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-07-15 20:52:42.796105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-07-15 20:52:42.796115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-07-15 20:52:42.796383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.533 [2024-07-15 20:52:42.796393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.533 qpair failed and we were unable to recover it. 00:27:08.533 [2024-07-15 20:52:42.796571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.796581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 00:27:08.534 [2024-07-15 20:52:42.796756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.796766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 00:27:08.534 [2024-07-15 20:52:42.797000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.797010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 00:27:08.534 [2024-07-15 20:52:42.797178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.797188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 00:27:08.534 [2024-07-15 20:52:42.797392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.797402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 00:27:08.534 [2024-07-15 20:52:42.797663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.797673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 00:27:08.534 [2024-07-15 20:52:42.797922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.797932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 
00:27:08.534 [2024-07-15 20:52:42.798140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.798151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 00:27:08.534 [2024-07-15 20:52:42.798352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.798362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 00:27:08.534 [2024-07-15 20:52:42.798545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.798555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 00:27:08.534 [2024-07-15 20:52:42.798737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.798747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 00:27:08.534 [2024-07-15 20:52:42.799008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.799018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 00:27:08.534 [2024-07-15 20:52:42.799326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.799338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 00:27:08.534 [2024-07-15 20:52:42.799544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.799554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 00:27:08.534 [2024-07-15 20:52:42.799753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.799763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 00:27:08.534 [2024-07-15 20:52:42.799898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.799907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 00:27:08.534 [2024-07-15 20:52:42.800144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.800154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 
00:27:08.534 [2024-07-15 20:52:42.800402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.800412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 00:27:08.534 [2024-07-15 20:52:42.800651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.800661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 00:27:08.534 [2024-07-15 20:52:42.800920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.800930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 00:27:08.534 [2024-07-15 20:52:42.801166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.801178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 00:27:08.534 [2024-07-15 20:52:42.801384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.801394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 00:27:08.534 [2024-07-15 20:52:42.801648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.801658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 00:27:08.534 [2024-07-15 20:52:42.801911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.801921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 00:27:08.534 [2024-07-15 20:52:42.802160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.802170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 00:27:08.534 [2024-07-15 20:52:42.802378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.802388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 00:27:08.534 [2024-07-15 20:52:42.802587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.802597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 
00:27:08.534 [2024-07-15 20:52:42.802776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.802786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 00:27:08.534 [2024-07-15 20:52:42.802953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.802964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 00:27:08.534 [2024-07-15 20:52:42.803235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.803246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 00:27:08.534 [2024-07-15 20:52:42.803378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.803388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 00:27:08.534 [2024-07-15 20:52:42.803554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.803564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 00:27:08.534 [2024-07-15 20:52:42.803797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.803807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 00:27:08.534 [2024-07-15 20:52:42.804062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.804072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 00:27:08.534 [2024-07-15 20:52:42.804255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.804265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 00:27:08.534 [2024-07-15 20:52:42.804528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.804538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 00:27:08.534 [2024-07-15 20:52:42.804794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.804804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 
00:27:08.534 [2024-07-15 20:52:42.805041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.805051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 00:27:08.534 [2024-07-15 20:52:42.805235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.805246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 00:27:08.534 [2024-07-15 20:52:42.805435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.805445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 00:27:08.534 [2024-07-15 20:52:42.805623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.534 [2024-07-15 20:52:42.805633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.534 qpair failed and we were unable to recover it. 00:27:08.534 [2024-07-15 20:52:42.805867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.535 [2024-07-15 20:52:42.805878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.535 qpair failed and we were unable to recover it. 00:27:08.535 [2024-07-15 20:52:42.806091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.535 [2024-07-15 20:52:42.806101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.535 qpair failed and we were unable to recover it. 00:27:08.535 [2024-07-15 20:52:42.806291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.535 [2024-07-15 20:52:42.806301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.535 qpair failed and we were unable to recover it. 00:27:08.535 [2024-07-15 20:52:42.806539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.535 [2024-07-15 20:52:42.806549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.535 qpair failed and we were unable to recover it. 00:27:08.535 [2024-07-15 20:52:42.806729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.535 [2024-07-15 20:52:42.806739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.535 qpair failed and we were unable to recover it. 00:27:08.535 [2024-07-15 20:52:42.806998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.535 [2024-07-15 20:52:42.807008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.535 qpair failed and we were unable to recover it. 
00:27:08.535 [2024-07-15 20:52:42.807295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.535 [2024-07-15 20:52:42.807305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.535 qpair failed and we were unable to recover it. 00:27:08.535 [2024-07-15 20:52:42.807512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.535 [2024-07-15 20:52:42.807522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.535 qpair failed and we were unable to recover it. 00:27:08.535 [2024-07-15 20:52:42.807759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.535 [2024-07-15 20:52:42.807769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.535 qpair failed and we were unable to recover it. 00:27:08.535 [2024-07-15 20:52:42.808027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.535 [2024-07-15 20:52:42.808038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.535 qpair failed and we were unable to recover it. 00:27:08.535 [2024-07-15 20:52:42.808301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.535 [2024-07-15 20:52:42.808312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.535 qpair failed and we were unable to recover it. 00:27:08.535 [2024-07-15 20:52:42.808440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.535 [2024-07-15 20:52:42.808452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.535 qpair failed and we were unable to recover it. 00:27:08.535 [2024-07-15 20:52:42.808649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.535 [2024-07-15 20:52:42.808660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.535 qpair failed and we were unable to recover it. 00:27:08.535 [2024-07-15 20:52:42.808915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.535 [2024-07-15 20:52:42.808926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.535 qpair failed and we were unable to recover it. 00:27:08.535 [2024-07-15 20:52:42.809199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.535 [2024-07-15 20:52:42.809210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.535 qpair failed and we were unable to recover it. 00:27:08.535 [2024-07-15 20:52:42.809427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.535 [2024-07-15 20:52:42.809438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.535 qpair failed and we were unable to recover it. 
00:27:08.535 [2024-07-15 20:52:42.809632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.535 [2024-07-15 20:52:42.809642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.535 qpair failed and we were unable to recover it. 00:27:08.535 [2024-07-15 20:52:42.809925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.535 [2024-07-15 20:52:42.809935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.535 qpair failed and we were unable to recover it. 00:27:08.535 [2024-07-15 20:52:42.810037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.535 [2024-07-15 20:52:42.810048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.535 qpair failed and we were unable to recover it. 00:27:08.535 [2024-07-15 20:52:42.810252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.535 [2024-07-15 20:52:42.810263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.535 qpair failed and we were unable to recover it. 00:27:08.535 [2024-07-15 20:52:42.810504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.535 [2024-07-15 20:52:42.810514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.535 qpair failed and we were unable to recover it. 00:27:08.535 [2024-07-15 20:52:42.810772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.535 [2024-07-15 20:52:42.810783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.535 qpair failed and we were unable to recover it. 00:27:08.535 [2024-07-15 20:52:42.811040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.535 [2024-07-15 20:52:42.811050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.535 qpair failed and we were unable to recover it. 00:27:08.535 [2024-07-15 20:52:42.811285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.535 [2024-07-15 20:52:42.811296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.535 qpair failed and we were unable to recover it. 00:27:08.535 [2024-07-15 20:52:42.811534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.535 [2024-07-15 20:52:42.811544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.535 qpair failed and we were unable to recover it. 00:27:08.535 [2024-07-15 20:52:42.811801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.535 [2024-07-15 20:52:42.811812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.535 qpair failed and we were unable to recover it. 
00:27:08.535 [2024-07-15 20:52:42.812069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.535 [2024-07-15 20:52:42.812080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.535 qpair failed and we were unable to recover it. 00:27:08.535 [2024-07-15 20:52:42.812212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.535 [2024-07-15 20:52:42.812222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.535 qpair failed and we were unable to recover it. 00:27:08.535 [2024-07-15 20:52:42.812411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.535 [2024-07-15 20:52:42.812421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.535 qpair failed and we were unable to recover it. 00:27:08.535 [2024-07-15 20:52:42.812600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.535 [2024-07-15 20:52:42.812610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.535 qpair failed and we were unable to recover it. 00:27:08.535 [2024-07-15 20:52:42.812723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.535 [2024-07-15 20:52:42.812734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.535 qpair failed and we were unable to recover it. 00:27:08.535 [2024-07-15 20:52:42.812911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.535 [2024-07-15 20:52:42.812921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.535 qpair failed and we were unable to recover it. 00:27:08.535 [2024-07-15 20:52:42.813112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.535 [2024-07-15 20:52:42.813122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.535 qpair failed and we were unable to recover it. 00:27:08.535 [2024-07-15 20:52:42.813410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.535 [2024-07-15 20:52:42.813420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.535 qpair failed and we were unable to recover it. 00:27:08.535 [2024-07-15 20:52:42.813618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.535 [2024-07-15 20:52:42.813628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.535 qpair failed and we were unable to recover it. 00:27:08.535 [2024-07-15 20:52:42.813729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.535 [2024-07-15 20:52:42.813739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.535 qpair failed and we were unable to recover it. 
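errno 111 is ECONNREFUSED on Linux: host 10.0.0.2 was reachable, but nothing was accepting connections on port 4420 (the IANA-assigned NVMe/TCP port) at the time of each attempt. A minimal, self-contained C sketch (illustrative only, not SPDK code) that reproduces the same failure mode against a port with no listener:

    /* Illustrative only -- not SPDK code. Shows how connect() yields
     * errno 111 (ECONNREFUSED) when the peer is reachable but no socket
     * is listening on the target port. The address and port mirror the
     * log above; any host without a listener on 4420 behaves the same. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);              /* NVMe/TCP default port */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* On Linux, errno == ECONNREFUSED == 111 when the connection
             * is actively refused, i.e. no listener on the port. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }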
00:27:08.535 [2024-07-15 20:52:42.813973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.535 [2024-07-15 20:52:42.813983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.535 qpair failed and we were unable to recover it.
00:27:08.535 [2024-07-15 20:52:42.814505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.535 [2024-07-15 20:52:42.814533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:08.535 qpair failed and we were unable to recover it.
[... from 20:52:42.814505 onward the failing qpair handle is 0x18d9ed0; the same three-line failure repeats for each further reconnect attempt, timestamps 2024-07-15 20:52:42.814732 through 20:52:42.825587 ...]
00:27:08.537 [2024-07-15 20:52:42.825768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-07-15 20:52:42.825781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-07-15 20:52:42.825964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-07-15 20:52:42.825977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-07-15 20:52:42.826229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-07-15 20:52:42.826243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-07-15 20:52:42.826368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-07-15 20:52:42.826382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-07-15 20:52:42.826591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-07-15 20:52:42.826604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-07-15 20:52:42.826794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-07-15 20:52:42.826808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-07-15 20:52:42.826941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-07-15 20:52:42.826954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-07-15 20:52:42.827095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-07-15 20:52:42.827109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-07-15 20:52:42.827236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-07-15 20:52:42.827250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-07-15 20:52:42.827515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-07-15 20:52:42.827529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 
00:27:08.537 [2024-07-15 20:52:42.827726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-07-15 20:52:42.827740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-07-15 20:52:42.827915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-07-15 20:52:42.827929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-07-15 20:52:42.828060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-07-15 20:52:42.828074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-07-15 20:52:42.828264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-07-15 20:52:42.828279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-07-15 20:52:42.828505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-07-15 20:52:42.828519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-07-15 20:52:42.828659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-07-15 20:52:42.828672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-07-15 20:52:42.828807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-07-15 20:52:42.828821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-07-15 20:52:42.829010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-07-15 20:52:42.829024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-07-15 20:52:42.829237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-07-15 20:52:42.829251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-07-15 20:52:42.829441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-07-15 20:52:42.829456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 
00:27:08.537 [2024-07-15 20:52:42.829578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.537 [2024-07-15 20:52:42.829592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.537 qpair failed and we were unable to recover it. 00:27:08.537 [2024-07-15 20:52:42.829788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.829801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-07-15 20:52:42.829974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.829988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-07-15 20:52:42.830112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.830125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-07-15 20:52:42.830272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.830286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-07-15 20:52:42.830430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.830444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-07-15 20:52:42.830708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.830722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-07-15 20:52:42.830917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.830931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-07-15 20:52:42.831105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.831119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-07-15 20:52:42.831387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.831401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 
00:27:08.538 [2024-07-15 20:52:42.831590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.831604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-07-15 20:52:42.831739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.831752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-07-15 20:52:42.831995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.832008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-07-15 20:52:42.832204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.832217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-07-15 20:52:42.832401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.832415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-07-15 20:52:42.832522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.832536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-07-15 20:52:42.832729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.832743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-07-15 20:52:42.832864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.832879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-07-15 20:52:42.833008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.833021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-07-15 20:52:42.833203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.833216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 
00:27:08.538 [2024-07-15 20:52:42.833488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.833502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-07-15 20:52:42.833675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.833688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-07-15 20:52:42.833828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.833842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-07-15 20:52:42.834093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.834106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-07-15 20:52:42.834235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.834250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-07-15 20:52:42.834494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.834507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-07-15 20:52:42.834711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.834725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-07-15 20:52:42.834861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.834874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-07-15 20:52:42.835048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.835063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-07-15 20:52:42.835253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.835267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 
00:27:08.538 [2024-07-15 20:52:42.835400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.835414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-07-15 20:52:42.835658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.835672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-07-15 20:52:42.835866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.835879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-07-15 20:52:42.836152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.836165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-07-15 20:52:42.836417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.836431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-07-15 20:52:42.836639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.836652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-07-15 20:52:42.836920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.836933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-07-15 20:52:42.837111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.837124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-07-15 20:52:42.837393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.837407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-07-15 20:52:42.837600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.837613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 
00:27:08.538 [2024-07-15 20:52:42.837808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.837821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-07-15 20:52:42.838075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.538 [2024-07-15 20:52:42.838088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.538 qpair failed and we were unable to recover it. 00:27:08.538 [2024-07-15 20:52:42.838264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.838278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-07-15 20:52:42.838386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.838400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-07-15 20:52:42.838591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.838607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-07-15 20:52:42.838848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.838862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-07-15 20:52:42.839104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.839118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-07-15 20:52:42.839312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.839327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-07-15 20:52:42.839524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.839537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-07-15 20:52:42.839669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.839683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 
00:27:08.539 [2024-07-15 20:52:42.839946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.839959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-07-15 20:52:42.840096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.840110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-07-15 20:52:42.840321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.840335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-07-15 20:52:42.840474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.840488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-07-15 20:52:42.840684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.840698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-07-15 20:52:42.840893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.840907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-07-15 20:52:42.841100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.841113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-07-15 20:52:42.841305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.841318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-07-15 20:52:42.841512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.841526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-07-15 20:52:42.841766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.841780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 
00:27:08.539 [2024-07-15 20:52:42.841957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.841970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-07-15 20:52:42.842215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.842234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-07-15 20:52:42.842427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.842441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-07-15 20:52:42.842621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.842634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-07-15 20:52:42.842778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.842791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-07-15 20:52:42.842977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.842991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-07-15 20:52:42.843172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.843185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-07-15 20:52:42.843407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.843421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-07-15 20:52:42.843690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.843704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-07-15 20:52:42.843899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.843912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 
00:27:08.539 [2024-07-15 20:52:42.844024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.844037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-07-15 20:52:42.844212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.844232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-07-15 20:52:42.844476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.844490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-07-15 20:52:42.844617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.844630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-07-15 20:52:42.844751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.844765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-07-15 20:52:42.844941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.844954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-07-15 20:52:42.845220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.845238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-07-15 20:52:42.845427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.845441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-07-15 20:52:42.845621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.845634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-07-15 20:52:42.845828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.845842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 
00:27:08.539 [2024-07-15 20:52:42.845953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.845967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-07-15 20:52:42.846149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.846163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-07-15 20:52:42.846285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.846299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.539 [2024-07-15 20:52:42.846489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.539 [2024-07-15 20:52:42.846502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.539 qpair failed and we were unable to recover it. 00:27:08.540 [2024-07-15 20:52:42.846616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.540 [2024-07-15 20:52:42.846629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.540 qpair failed and we were unable to recover it. 00:27:08.540 [2024-07-15 20:52:42.846808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.540 [2024-07-15 20:52:42.846822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.540 qpair failed and we were unable to recover it. 00:27:08.540 [2024-07-15 20:52:42.846996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.540 [2024-07-15 20:52:42.847010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.540 qpair failed and we were unable to recover it. 00:27:08.540 [2024-07-15 20:52:42.847217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.540 [2024-07-15 20:52:42.847235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.540 qpair failed and we were unable to recover it. 00:27:08.540 [2024-07-15 20:52:42.847367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.540 [2024-07-15 20:52:42.847381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.540 qpair failed and we were unable to recover it. 00:27:08.540 [2024-07-15 20:52:42.847571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.540 [2024-07-15 20:52:42.847584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.540 qpair failed and we were unable to recover it. 
00:27:08.540 [2024-07-15 20:52:42.847864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.540 [2024-07-15 20:52:42.847877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.540 qpair failed and we were unable to recover it. 00:27:08.540 [2024-07-15 20:52:42.848056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.540 [2024-07-15 20:52:42.848070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.540 qpair failed and we were unable to recover it. 00:27:08.540 [2024-07-15 20:52:42.848342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.540 [2024-07-15 20:52:42.848357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.540 qpair failed and we were unable to recover it. 00:27:08.540 [2024-07-15 20:52:42.848548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.540 [2024-07-15 20:52:42.848563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.540 qpair failed and we were unable to recover it. 00:27:08.540 [2024-07-15 20:52:42.848828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.540 [2024-07-15 20:52:42.848842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.540 qpair failed and we were unable to recover it. 00:27:08.540 [2024-07-15 20:52:42.848967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.540 [2024-07-15 20:52:42.848981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.540 qpair failed and we were unable to recover it. 00:27:08.540 [2024-07-15 20:52:42.849105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.540 [2024-07-15 20:52:42.849119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.540 qpair failed and we were unable to recover it. 00:27:08.540 [2024-07-15 20:52:42.849394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.540 [2024-07-15 20:52:42.849408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.540 qpair failed and we were unable to recover it. 00:27:08.540 [2024-07-15 20:52:42.849626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.540 [2024-07-15 20:52:42.849644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.540 qpair failed and we were unable to recover it. 00:27:08.540 [2024-07-15 20:52:42.849906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.540 [2024-07-15 20:52:42.849920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.540 qpair failed and we were unable to recover it. 
00:27:08.540 [2024-07-15 20:52:42.850041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.540 [2024-07-15 20:52:42.850054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.540 qpair failed and we were unable to recover it. 00:27:08.540 [2024-07-15 20:52:42.850250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.540 [2024-07-15 20:52:42.850264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.540 qpair failed and we were unable to recover it. 00:27:08.540 [2024-07-15 20:52:42.850536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.540 [2024-07-15 20:52:42.850550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.540 qpair failed and we were unable to recover it. 00:27:08.540 [2024-07-15 20:52:42.850658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.540 [2024-07-15 20:52:42.850671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.540 qpair failed and we were unable to recover it. 00:27:08.540 [2024-07-15 20:52:42.850917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.540 [2024-07-15 20:52:42.850931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.540 qpair failed and we were unable to recover it. 00:27:08.540 [2024-07-15 20:52:42.851108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.540 [2024-07-15 20:52:42.851122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.540 qpair failed and we were unable to recover it. 00:27:08.540 [2024-07-15 20:52:42.851388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.540 [2024-07-15 20:52:42.851402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.540 qpair failed and we were unable to recover it. 00:27:08.540 [2024-07-15 20:52:42.851525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.540 [2024-07-15 20:52:42.851538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.540 qpair failed and we were unable to recover it. 00:27:08.540 [2024-07-15 20:52:42.851727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.540 [2024-07-15 20:52:42.851741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.540 qpair failed and we were unable to recover it. 00:27:08.540 [2024-07-15 20:52:42.851895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.540 [2024-07-15 20:52:42.851909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.540 qpair failed and we were unable to recover it. 
00:27:08.540 [2024-07-15 20:52:42.852093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.540 [2024-07-15 20:52:42.852106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.540 qpair failed and we were unable to recover it.
[... one more identical failure against tqpair=0x18d9ed0, then from 20:52:42.852567 the same three-part error repeats against a new qpair, tqpair=0x7f4848000b90 (addr=10.0.0.2, port=4420), roughly 38 times through 20:52:42.860; only the timestamps differ ...]
00:27:08.541 [2024-07-15 20:52:42.860345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.541 [2024-07-15 20:52:42.860360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:08.541 qpair failed and we were unable to recover it. 00:27:08.541 [2024-07-15 20:52:42.860549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.541 [2024-07-15 20:52:42.860562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:08.541 qpair failed and we were unable to recover it. 00:27:08.541 [2024-07-15 20:52:42.860751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.541 [2024-07-15 20:52:42.860768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.541 qpair failed and we were unable to recover it. 00:27:08.541 [2024-07-15 20:52:42.860962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.541 [2024-07-15 20:52:42.860976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.541 qpair failed and we were unable to recover it. 00:27:08.541 [2024-07-15 20:52:42.861171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.541 [2024-07-15 20:52:42.861184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.541 qpair failed and we were unable to recover it. 00:27:08.541 [2024-07-15 20:52:42.861327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.541 [2024-07-15 20:52:42.861341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.541 qpair failed and we were unable to recover it. 00:27:08.541 [2024-07-15 20:52:42.861539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.541 [2024-07-15 20:52:42.861553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.541 qpair failed and we were unable to recover it. 00:27:08.541 [2024-07-15 20:52:42.861743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.541 [2024-07-15 20:52:42.861757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.541 qpair failed and we were unable to recover it. 00:27:08.541 [2024-07-15 20:52:42.861901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.541 [2024-07-15 20:52:42.861915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.541 qpair failed and we were unable to recover it. 00:27:08.541 [2024-07-15 20:52:42.862123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.541 [2024-07-15 20:52:42.862136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.541 qpair failed and we were unable to recover it. 
00:27:08.541 [2024-07-15 20:52:42.862327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.862341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 00:27:08.542 [2024-07-15 20:52:42.862453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.862466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 00:27:08.542 [2024-07-15 20:52:42.862593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.862606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 00:27:08.542 [2024-07-15 20:52:42.862823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.862837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 00:27:08.542 [2024-07-15 20:52:42.863188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.863206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 00:27:08.542 [2024-07-15 20:52:42.863438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.863454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 00:27:08.542 [2024-07-15 20:52:42.863569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.863583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 00:27:08.542 [2024-07-15 20:52:42.863790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.863803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 00:27:08.542 [2024-07-15 20:52:42.863980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.863994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 00:27:08.542 [2024-07-15 20:52:42.864249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.864264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 
00:27:08.542 [2024-07-15 20:52:42.864396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.864410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 00:27:08.542 [2024-07-15 20:52:42.864652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.864666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 00:27:08.542 [2024-07-15 20:52:42.864860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.864873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 00:27:08.542 [2024-07-15 20:52:42.865085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.865099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 00:27:08.542 [2024-07-15 20:52:42.865316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.865330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 00:27:08.542 [2024-07-15 20:52:42.865469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.865482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 00:27:08.542 [2024-07-15 20:52:42.865627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.865641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 00:27:08.542 [2024-07-15 20:52:42.865751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.865765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 00:27:08.542 [2024-07-15 20:52:42.865964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.865978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 00:27:08.542 [2024-07-15 20:52:42.866233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.866250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 
00:27:08.542 [2024-07-15 20:52:42.866453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.866467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 00:27:08.542 [2024-07-15 20:52:42.866651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.866665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 00:27:08.542 [2024-07-15 20:52:42.866799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.866813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 00:27:08.542 [2024-07-15 20:52:42.867010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.867024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 00:27:08.542 [2024-07-15 20:52:42.867292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.867307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 00:27:08.542 [2024-07-15 20:52:42.867433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.867446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 00:27:08.542 [2024-07-15 20:52:42.867640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.867654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 00:27:08.542 [2024-07-15 20:52:42.867802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.867816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 00:27:08.542 [2024-07-15 20:52:42.867937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.867949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 00:27:08.542 [2024-07-15 20:52:42.868165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.868178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 
00:27:08.542 [2024-07-15 20:52:42.868425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.868439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 00:27:08.542 [2024-07-15 20:52:42.868637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.868650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 00:27:08.542 [2024-07-15 20:52:42.868856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.868869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 00:27:08.542 [2024-07-15 20:52:42.868999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.869014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 00:27:08.542 [2024-07-15 20:52:42.869238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.869252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 00:27:08.542 [2024-07-15 20:52:42.869395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.869408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 00:27:08.542 [2024-07-15 20:52:42.869573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.869588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 00:27:08.542 [2024-07-15 20:52:42.869763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.869776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 00:27:08.542 [2024-07-15 20:52:42.869965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.869978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 00:27:08.542 [2024-07-15 20:52:42.870111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.870125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 
00:27:08.542 [2024-07-15 20:52:42.870306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.870320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.542 qpair failed and we were unable to recover it. 00:27:08.542 [2024-07-15 20:52:42.870590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.542 [2024-07-15 20:52:42.870604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 00:27:08.543 [2024-07-15 20:52:42.870846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.870859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 00:27:08.543 [2024-07-15 20:52:42.870941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.870955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 00:27:08.543 [2024-07-15 20:52:42.871151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.871165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 00:27:08.543 [2024-07-15 20:52:42.871445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.871459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 00:27:08.543 [2024-07-15 20:52:42.871570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.871586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 00:27:08.543 [2024-07-15 20:52:42.871720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.871734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 00:27:08.543 [2024-07-15 20:52:42.871919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.871933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 00:27:08.543 [2024-07-15 20:52:42.872125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.872138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 
00:27:08.543 [2024-07-15 20:52:42.872402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.872417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 00:27:08.543 [2024-07-15 20:52:42.872550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.872565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 00:27:08.543 [2024-07-15 20:52:42.872752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.872765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 00:27:08.543 [2024-07-15 20:52:42.873008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.873023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 00:27:08.543 [2024-07-15 20:52:42.873155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.873169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 00:27:08.543 [2024-07-15 20:52:42.873339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.873354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 00:27:08.543 [2024-07-15 20:52:42.873495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.873509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 00:27:08.543 [2024-07-15 20:52:42.873728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.873741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 00:27:08.543 [2024-07-15 20:52:42.873954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.873968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 00:27:08.543 [2024-07-15 20:52:42.874173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.874187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 
00:27:08.543 [2024-07-15 20:52:42.874332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.874348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 00:27:08.543 [2024-07-15 20:52:42.874445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.874458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 00:27:08.543 [2024-07-15 20:52:42.874637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.874651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 00:27:08.543 [2024-07-15 20:52:42.874790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.874805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 00:27:08.543 [2024-07-15 20:52:42.875009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.875023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 00:27:08.543 [2024-07-15 20:52:42.875245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.875258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 00:27:08.543 [2024-07-15 20:52:42.875341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.875354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 00:27:08.543 [2024-07-15 20:52:42.875576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.875590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 00:27:08.543 [2024-07-15 20:52:42.875708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.875720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 00:27:08.543 [2024-07-15 20:52:42.875958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.875971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 
00:27:08.543 [2024-07-15 20:52:42.876090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.876103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 00:27:08.543 [2024-07-15 20:52:42.876299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.876313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 00:27:08.543 [2024-07-15 20:52:42.876584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.876597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 00:27:08.543 [2024-07-15 20:52:42.876774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.876790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 00:27:08.543 [2024-07-15 20:52:42.876985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.877001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 00:27:08.543 [2024-07-15 20:52:42.877201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.877215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 00:27:08.543 [2024-07-15 20:52:42.877425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.877440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 00:27:08.543 [2024-07-15 20:52:42.877690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.877703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 00:27:08.543 [2024-07-15 20:52:42.877838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.877851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 00:27:08.543 [2024-07-15 20:52:42.878053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.878067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 
00:27:08.543 [2024-07-15 20:52:42.878277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.878293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 00:27:08.543 [2024-07-15 20:52:42.878419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.878434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.543 qpair failed and we were unable to recover it. 00:27:08.543 [2024-07-15 20:52:42.878710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.543 [2024-07-15 20:52:42.878724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.544 qpair failed and we were unable to recover it. 00:27:08.544 [2024-07-15 20:52:42.878865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.544 [2024-07-15 20:52:42.878880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.544 qpair failed and we were unable to recover it. 00:27:08.544 [2024-07-15 20:52:42.879076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.544 [2024-07-15 20:52:42.879089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.544 qpair failed and we were unable to recover it. 00:27:08.544 [2024-07-15 20:52:42.879288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.544 [2024-07-15 20:52:42.879302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.544 qpair failed and we were unable to recover it. 00:27:08.544 [2024-07-15 20:52:42.879432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.544 [2024-07-15 20:52:42.879445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.544 qpair failed and we were unable to recover it. 00:27:08.544 [2024-07-15 20:52:42.879581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.544 [2024-07-15 20:52:42.879595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.544 qpair failed and we were unable to recover it. 00:27:08.544 [2024-07-15 20:52:42.879843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.544 [2024-07-15 20:52:42.879858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.544 qpair failed and we were unable to recover it. 00:27:08.544 [2024-07-15 20:52:42.879989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.544 [2024-07-15 20:52:42.880002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.544 qpair failed and we were unable to recover it. 
00:27:08.544 [2024-07-15 20:52:42.880119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.544 [2024-07-15 20:52:42.880132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.544 qpair failed and we were unable to recover it. 00:27:08.544 [2024-07-15 20:52:42.880274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.544 [2024-07-15 20:52:42.880288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.544 qpair failed and we were unable to recover it. 00:27:08.544 [2024-07-15 20:52:42.880403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.544 [2024-07-15 20:52:42.880416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.544 qpair failed and we were unable to recover it. 00:27:08.544 [2024-07-15 20:52:42.880595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.544 [2024-07-15 20:52:42.880608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.544 qpair failed and we were unable to recover it. 00:27:08.544 [2024-07-15 20:52:42.880729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.544 [2024-07-15 20:52:42.880743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.544 qpair failed and we were unable to recover it. 00:27:08.544 [2024-07-15 20:52:42.880887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.544 [2024-07-15 20:52:42.880901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.544 qpair failed and we were unable to recover it. 00:27:08.544 [2024-07-15 20:52:42.881080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.544 [2024-07-15 20:52:42.881094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.544 qpair failed and we were unable to recover it. 00:27:08.544 [2024-07-15 20:52:42.881288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.544 [2024-07-15 20:52:42.881303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.544 qpair failed and we were unable to recover it. 00:27:08.544 [2024-07-15 20:52:42.881431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.544 [2024-07-15 20:52:42.881445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.544 qpair failed and we were unable to recover it. 00:27:08.544 [2024-07-15 20:52:42.881575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.544 [2024-07-15 20:52:42.881588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.544 qpair failed and we were unable to recover it. 
00:27:08.544 [2024-07-15 20:52:42.881714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.544 [2024-07-15 20:52:42.881727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.544 qpair failed and we were unable to recover it. 00:27:08.544 [2024-07-15 20:52:42.881977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.544 [2024-07-15 20:52:42.881990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.544 qpair failed and we were unable to recover it. 00:27:08.544 [2024-07-15 20:52:42.882180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.544 [2024-07-15 20:52:42.882194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.544 qpair failed and we were unable to recover it. 00:27:08.544 [2024-07-15 20:52:42.882390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.544 [2024-07-15 20:52:42.882405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.544 qpair failed and we were unable to recover it. 00:27:08.544 [2024-07-15 20:52:42.882602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.544 [2024-07-15 20:52:42.882616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.544 qpair failed and we were unable to recover it. 00:27:08.544 [2024-07-15 20:52:42.882733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.544 [2024-07-15 20:52:42.882746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.544 qpair failed and we were unable to recover it. 00:27:08.544 [2024-07-15 20:52:42.882883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.544 [2024-07-15 20:52:42.882896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.544 qpair failed and we were unable to recover it. 00:27:08.544 [2024-07-15 20:52:42.883095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.544 [2024-07-15 20:52:42.883109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.544 qpair failed and we were unable to recover it. 00:27:08.544 [2024-07-15 20:52:42.883245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.544 [2024-07-15 20:52:42.883258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.544 qpair failed and we were unable to recover it. 00:27:08.544 [2024-07-15 20:52:42.883428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.544 [2024-07-15 20:52:42.883441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.544 qpair failed and we were unable to recover it. 
00:27:08.544 [2024-07-15 20:52:42.883641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.544 [2024-07-15 20:52:42.883655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.544 qpair failed and we were unable to recover it. 00:27:08.544 [2024-07-15 20:52:42.883788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.544 [2024-07-15 20:52:42.883801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.544 qpair failed and we were unable to recover it. 00:27:08.544 [2024-07-15 20:52:42.883994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.544 [2024-07-15 20:52:42.884007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.544 qpair failed and we were unable to recover it. 00:27:08.544 [2024-07-15 20:52:42.884229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.544 [2024-07-15 20:52:42.884243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.544 qpair failed and we were unable to recover it. 00:27:08.544 [2024-07-15 20:52:42.884431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.544 [2024-07-15 20:52:42.884444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.544 qpair failed and we were unable to recover it. 00:27:08.544 [2024-07-15 20:52:42.884577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.544 [2024-07-15 20:52:42.884588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.544 qpair failed and we were unable to recover it. 00:27:08.544 [2024-07-15 20:52:42.884697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.544 [2024-07-15 20:52:42.884707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.544 qpair failed and we were unable to recover it. 00:27:08.544 [2024-07-15 20:52:42.884892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.544 [2024-07-15 20:52:42.884903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.544 qpair failed and we were unable to recover it. 00:27:08.544 [2024-07-15 20:52:42.885021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.544 [2024-07-15 20:52:42.885031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.544 qpair failed and we were unable to recover it. 00:27:08.544 [2024-07-15 20:52:42.885151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.544 [2024-07-15 20:52:42.885161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.545 qpair failed and we were unable to recover it. 
00:27:08.545 [2024-07-15 20:52:42.885326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.545 [2024-07-15 20:52:42.885336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.545 qpair failed and we were unable to recover it.
00:27:08.545 [... the same posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock "sock connection error" / "qpair failed and we were unable to recover it." triplet repeats for tqpair=0x7f4840000b90 through 2024-07-15 20:52:42.901216 ...]
00:27:08.547 [2024-07-15 20:52:42.901399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.547 [2024-07-15 20:52:42.901418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:08.547 qpair failed and we were unable to recover it.
00:27:08.547 [... the same triplet repeats for tqpair=0x7f4848000b90 through 2024-07-15 20:52:42.912145 ...]
00:27:08.548 [2024-07-15 20:52:42.912315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.548 [2024-07-15 20:52:42.912327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.548 qpair failed and we were unable to recover it.
00:27:08.550 [... the same triplet repeats for tqpair=0x7f4840000b90 through 2024-07-15 20:52:42.923345 ...]
00:27:08.550 [2024-07-15 20:52:42.923614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.550 [2024-07-15 20:52:42.923624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.550 qpair failed and we were unable to recover it. 00:27:08.550 [2024-07-15 20:52:42.923813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.550 [2024-07-15 20:52:42.923824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.550 qpair failed and we were unable to recover it. 00:27:08.550 [2024-07-15 20:52:42.923904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.550 [2024-07-15 20:52:42.923914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.550 qpair failed and we were unable to recover it. 00:27:08.550 [2024-07-15 20:52:42.924170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.550 [2024-07-15 20:52:42.924180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.550 qpair failed and we were unable to recover it. 00:27:08.550 [2024-07-15 20:52:42.924285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.550 [2024-07-15 20:52:42.924295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.550 qpair failed and we were unable to recover it. 00:27:08.550 [2024-07-15 20:52:42.924434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.550 [2024-07-15 20:52:42.924444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.550 qpair failed and we were unable to recover it. 00:27:08.550 [2024-07-15 20:52:42.924640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.550 [2024-07-15 20:52:42.924650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.550 qpair failed and we were unable to recover it. 00:27:08.550 [2024-07-15 20:52:42.924823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.550 [2024-07-15 20:52:42.924834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.550 qpair failed and we were unable to recover it. 00:27:08.550 [2024-07-15 20:52:42.925041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.550 [2024-07-15 20:52:42.925052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.550 qpair failed and we were unable to recover it. 00:27:08.550 [2024-07-15 20:52:42.925312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.550 [2024-07-15 20:52:42.925322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.550 qpair failed and we were unable to recover it. 
00:27:08.550 [2024-07-15 20:52:42.925436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.550 [2024-07-15 20:52:42.925446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.550 qpair failed and we were unable to recover it. 00:27:08.550 [2024-07-15 20:52:42.925575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.550 [2024-07-15 20:52:42.925586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.550 qpair failed and we were unable to recover it. 00:27:08.550 [2024-07-15 20:52:42.925768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.550 [2024-07-15 20:52:42.925778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.550 qpair failed and we were unable to recover it. 00:27:08.550 [2024-07-15 20:52:42.926016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.550 [2024-07-15 20:52:42.926026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.550 qpair failed and we were unable to recover it. 00:27:08.550 [2024-07-15 20:52:42.926206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.550 [2024-07-15 20:52:42.926217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.550 qpair failed and we were unable to recover it. 00:27:08.550 [2024-07-15 20:52:42.926388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.550 [2024-07-15 20:52:42.926399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.550 qpair failed and we were unable to recover it. 00:27:08.550 [2024-07-15 20:52:42.926567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.550 [2024-07-15 20:52:42.926578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.550 qpair failed and we were unable to recover it. 00:27:08.550 [2024-07-15 20:52:42.926709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.550 [2024-07-15 20:52:42.926720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.550 qpair failed and we were unable to recover it. 00:27:08.550 [2024-07-15 20:52:42.926902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.550 [2024-07-15 20:52:42.926912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.550 qpair failed and we were unable to recover it. 00:27:08.550 [2024-07-15 20:52:42.927175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.550 [2024-07-15 20:52:42.927185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.550 qpair failed and we were unable to recover it. 
00:27:08.550 [2024-07-15 20:52:42.927386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.550 [2024-07-15 20:52:42.927397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.550 qpair failed and we were unable to recover it. 00:27:08.550 [2024-07-15 20:52:42.927630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.550 [2024-07-15 20:52:42.927641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.550 qpair failed and we were unable to recover it. 00:27:08.550 [2024-07-15 20:52:42.927899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.550 [2024-07-15 20:52:42.927909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.550 qpair failed and we were unable to recover it. 00:27:08.550 [2024-07-15 20:52:42.928170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.550 [2024-07-15 20:52:42.928180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.550 qpair failed and we were unable to recover it. 00:27:08.550 [2024-07-15 20:52:42.928428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.550 [2024-07-15 20:52:42.928441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.550 qpair failed and we were unable to recover it. 00:27:08.550 [2024-07-15 20:52:42.928623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.550 [2024-07-15 20:52:42.928633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.550 qpair failed and we were unable to recover it. 00:27:08.550 [2024-07-15 20:52:42.928838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.550 [2024-07-15 20:52:42.928848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 00:27:08.551 [2024-07-15 20:52:42.929029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.929040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 00:27:08.551 [2024-07-15 20:52:42.929212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.929222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 00:27:08.551 [2024-07-15 20:52:42.929463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.929474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 
00:27:08.551 [2024-07-15 20:52:42.929640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.929650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 00:27:08.551 [2024-07-15 20:52:42.929785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.929795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 00:27:08.551 [2024-07-15 20:52:42.929983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.929994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 00:27:08.551 [2024-07-15 20:52:42.930183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.930193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 00:27:08.551 [2024-07-15 20:52:42.930360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.930370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 00:27:08.551 [2024-07-15 20:52:42.930563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.930574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 00:27:08.551 [2024-07-15 20:52:42.930812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.930821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 00:27:08.551 [2024-07-15 20:52:42.931021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.931031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 00:27:08.551 [2024-07-15 20:52:42.931210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.931220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 00:27:08.551 [2024-07-15 20:52:42.931413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.931423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 
00:27:08.551 [2024-07-15 20:52:42.931532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.931543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 00:27:08.551 [2024-07-15 20:52:42.931672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.931683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 00:27:08.551 [2024-07-15 20:52:42.931853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.931865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 00:27:08.551 [2024-07-15 20:52:42.932133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.932145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 00:27:08.551 [2024-07-15 20:52:42.932330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.932342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 00:27:08.551 [2024-07-15 20:52:42.932576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.932588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 00:27:08.551 [2024-07-15 20:52:42.932832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.932844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 00:27:08.551 [2024-07-15 20:52:42.933122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.933134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 00:27:08.551 [2024-07-15 20:52:42.933315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.933327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 00:27:08.551 [2024-07-15 20:52:42.933455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.933467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 
00:27:08.551 [2024-07-15 20:52:42.933584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.933596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 00:27:08.551 [2024-07-15 20:52:42.933782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.933794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 00:27:08.551 [2024-07-15 20:52:42.933975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.933987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 00:27:08.551 [2024-07-15 20:52:42.934170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.934182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 00:27:08.551 [2024-07-15 20:52:42.934421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.934434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 00:27:08.551 [2024-07-15 20:52:42.934600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.934612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 00:27:08.551 [2024-07-15 20:52:42.934844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.934856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 00:27:08.551 [2024-07-15 20:52:42.935064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.935077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 00:27:08.551 [2024-07-15 20:52:42.935207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.935219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 00:27:08.551 [2024-07-15 20:52:42.935336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.935349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 
00:27:08.551 [2024-07-15 20:52:42.935475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.935487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 00:27:08.551 [2024-07-15 20:52:42.935670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.935683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 00:27:08.551 [2024-07-15 20:52:42.935856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.935868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 00:27:08.551 [2024-07-15 20:52:42.936056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.936069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 00:27:08.551 [2024-07-15 20:52:42.936197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.936213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 00:27:08.551 [2024-07-15 20:52:42.936358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.936385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 00:27:08.551 [2024-07-15 20:52:42.936700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.551 [2024-07-15 20:52:42.936716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:08.551 qpair failed and we were unable to recover it. 00:27:08.551 [2024-07-15 20:52:42.936934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.936949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 00:27:08.552 [2024-07-15 20:52:42.937136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.937151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 00:27:08.552 [2024-07-15 20:52:42.937348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.937364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 
00:27:08.552 [2024-07-15 20:52:42.937638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.937654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 00:27:08.552 [2024-07-15 20:52:42.937787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.937804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 00:27:08.552 [2024-07-15 20:52:42.937920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.937935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 00:27:08.552 [2024-07-15 20:52:42.938122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.938138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 00:27:08.552 [2024-07-15 20:52:42.938248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.938265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 00:27:08.552 [2024-07-15 20:52:42.938458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.938474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 00:27:08.552 [2024-07-15 20:52:42.938657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.938673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 00:27:08.552 [2024-07-15 20:52:42.938786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.938800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 00:27:08.552 [2024-07-15 20:52:42.939067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.939080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 00:27:08.552 [2024-07-15 20:52:42.939261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.939274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 
00:27:08.552 [2024-07-15 20:52:42.939455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.939468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 00:27:08.552 [2024-07-15 20:52:42.939646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.939658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 00:27:08.552 [2024-07-15 20:52:42.939776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.939787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 00:27:08.552 [2024-07-15 20:52:42.939972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.939984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 00:27:08.552 [2024-07-15 20:52:42.940065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.940078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 00:27:08.552 [2024-07-15 20:52:42.940264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.940277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 00:27:08.552 [2024-07-15 20:52:42.940396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.940409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 00:27:08.552 [2024-07-15 20:52:42.940595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.940607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 00:27:08.552 [2024-07-15 20:52:42.940778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.940791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 00:27:08.552 [2024-07-15 20:52:42.940975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.940988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 
00:27:08.552 [2024-07-15 20:52:42.941103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.941115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 00:27:08.552 [2024-07-15 20:52:42.941290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.941304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 00:27:08.552 [2024-07-15 20:52:42.941491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.941504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 00:27:08.552 [2024-07-15 20:52:42.941632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.941645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 00:27:08.552 [2024-07-15 20:52:42.941840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.941853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 00:27:08.552 [2024-07-15 20:52:42.942037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.942051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 00:27:08.552 [2024-07-15 20:52:42.942162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.942176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 00:27:08.552 [2024-07-15 20:52:42.942285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.942298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 00:27:08.552 [2024-07-15 20:52:42.942505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.942517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 00:27:08.552 [2024-07-15 20:52:42.942629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.942640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 
00:27:08.552 [2024-07-15 20:52:42.942874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.942886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 00:27:08.552 [2024-07-15 20:52:42.943125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.943137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 00:27:08.552 [2024-07-15 20:52:42.943330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.943343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 00:27:08.552 [2024-07-15 20:52:42.943472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.943484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 00:27:08.552 [2024-07-15 20:52:42.943621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.943635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 00:27:08.552 [2024-07-15 20:52:42.943749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.943762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 00:27:08.552 [2024-07-15 20:52:42.943876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.943889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.552 qpair failed and we were unable to recover it. 00:27:08.552 [2024-07-15 20:52:42.944093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.552 [2024-07-15 20:52:42.944107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.553 qpair failed and we were unable to recover it. 00:27:08.553 [2024-07-15 20:52:42.944280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.553 [2024-07-15 20:52:42.944292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.553 qpair failed and we were unable to recover it. 00:27:08.553 [2024-07-15 20:52:42.944391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.553 [2024-07-15 20:52:42.944404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.553 qpair failed and we were unable to recover it. 
00:27:08.553 [2024-07-15 20:52:42.944572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.553 [2024-07-15 20:52:42.944584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.553 qpair failed and we were unable to recover it. 00:27:08.553 [2024-07-15 20:52:42.944710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.553 [2024-07-15 20:52:42.944721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.553 qpair failed and we were unable to recover it. 00:27:08.553 [2024-07-15 20:52:42.944980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.553 [2024-07-15 20:52:42.944992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.553 qpair failed and we were unable to recover it. 00:27:08.553 [2024-07-15 20:52:42.945105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.553 [2024-07-15 20:52:42.945117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.553 qpair failed and we were unable to recover it. 00:27:08.553 [2024-07-15 20:52:42.945372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.553 [2024-07-15 20:52:42.945384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.553 qpair failed and we were unable to recover it. 00:27:08.553 [2024-07-15 20:52:42.945567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.553 [2024-07-15 20:52:42.945585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.553 qpair failed and we were unable to recover it. 00:27:08.553 [2024-07-15 20:52:42.945759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.553 [2024-07-15 20:52:42.945772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.553 qpair failed and we were unable to recover it. 00:27:08.553 [2024-07-15 20:52:42.945900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.553 [2024-07-15 20:52:42.945913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.553 qpair failed and we were unable to recover it. 00:27:08.553 [2024-07-15 20:52:42.946048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.553 [2024-07-15 20:52:42.946060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.553 qpair failed and we were unable to recover it. 00:27:08.553 [2024-07-15 20:52:42.946238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.553 [2024-07-15 20:52:42.946251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.553 qpair failed and we were unable to recover it. 
00:27:08.553 [2024-07-15 20:52:42.946377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.553 [2024-07-15 20:52:42.946389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.553 qpair failed and we were unable to recover it. 00:27:08.553 [2024-07-15 20:52:42.946671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.553 [2024-07-15 20:52:42.946684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.553 qpair failed and we were unable to recover it. 00:27:08.553 [2024-07-15 20:52:42.946862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.553 [2024-07-15 20:52:42.946875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.553 qpair failed and we were unable to recover it. 00:27:08.553 [2024-07-15 20:52:42.947126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.553 [2024-07-15 20:52:42.947138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.553 qpair failed and we were unable to recover it. 00:27:08.553 [2024-07-15 20:52:42.947398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.553 [2024-07-15 20:52:42.947411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.553 qpair failed and we were unable to recover it. 00:27:08.553 [2024-07-15 20:52:42.947638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.553 [2024-07-15 20:52:42.947651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.553 qpair failed and we were unable to recover it. 00:27:08.553 [2024-07-15 20:52:42.947830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.553 [2024-07-15 20:52:42.947843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.553 qpair failed and we were unable to recover it. 00:27:08.553 [2024-07-15 20:52:42.948029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.553 [2024-07-15 20:52:42.948042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.553 qpair failed and we were unable to recover it. 00:27:08.553 [2024-07-15 20:52:42.948229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.553 [2024-07-15 20:52:42.948242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.553 qpair failed and we were unable to recover it. 00:27:08.553 [2024-07-15 20:52:42.948360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.553 [2024-07-15 20:52:42.948371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.553 qpair failed and we were unable to recover it. 
00:27:08.553 [2024-07-15 20:52:42.948484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.553 [2024-07-15 20:52:42.948496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.553 qpair failed and we were unable to recover it.
00:27:08.553-00:27:08.555 [previous sequence -- connect() failed, errno = 111; sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. -- repeated ~79 more times, 20:52:42.948744 through 20:52:42.963065]
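[errno = 111 is ECONNREFUSED on Linux: the connect() calls above are being actively refused because nothing is listening on 10.0.0.2:4420 while the disconnect test tears the target down. A minimal standalone reproduction in plain POSIX C -- not SPDK code; only the address and port are taken from the log:]

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* Plain blocking TCP socket, same address family as the log's posix transport. */
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        /* If the host is reachable but no listener is bound to the port,
         * connect() fails and sets errno to ECONNREFUSED, i.e. 111 on Linux. */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }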
00:27:08.555 [2024-07-15 20:52:42.963181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.555 [2024-07-15 20:52:42.963193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.555 qpair failed and we were unable to recover it.
00:27:08.555 A controller has encountered a failure and is being reset.
00:27:08.555 [2024-07-15 20:52:42.963437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.555 [2024-07-15 20:52:42.963458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420 00:27:08.555 qpair failed and we were unable to recover it.
00:27:08.555 [previous sequence repeated for tqpair=0x18d9ed0 eight more times, 20:52:42.963657 through 20:52:42.965255]
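[after the reset notice above, the retries switch from tqpair=0x7f4840000b90 to tqpair=0x18d9ed0, which suggests the reconnect path is now driving a different queue-pair object for the resetting controller; the target is still refusing connections either way]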
00:27:08.555-00:27:08.556 [previous sequence -- connect() failed, errno = 111; sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. -- repeated ~40 more times, 20:52:42.965499 through 20:52:42.974898]
00:27:08.556 [previous sequence repeated for tqpair=0x18d9ed0 five more times, 20:52:42.975092 through 20:52:42.975993]
00:27:08.556 [2024-07-15 20:52:42.976207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.557 [2024-07-15 20:52:42.976232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420 00:27:08.557 qpair failed and we were unable to recover it.
00:27:08.557 [2024-07-15 20:52:42.976365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.557 [2024-07-15 20:52:42.976383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.557 qpair failed and we were unable to recover it.
00:27:08.557 [previous sequence repeated for tqpair=0x7f4840000b90 three more times, 20:52:42.976537 through 20:52:42.976896]
00:27:08.557-00:27:08.825 [previous sequence -- connect() failed, errno = 111; sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. -- repeated ~30 more times, 20:52:42.977156 through 20:52:42.983754]
00:27:08.825 [previous sequence repeated for tqpair=0x7f4840000b90, 20:52:42.983934 through 20:52:42.985729, interleaved with the following shell-trace lines from the test script]
00:27:08.825 20:52:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:27:08.825 20:52:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:27:08.826 20:52:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:27:08.826 20:52:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:27:08.826 20:52:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
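[the non-error lines above are bash xtrace output from the nvmf_target_disconnect_tc2 test case: a helper in common/autotest_common.sh finishes its check ((( i == 0 )) then return 0) and nvmf/common.sh records timing_exit start_nvmf_tgt before turning tracing off with set +x -- the harness has finished bringing the NVMf target back up while the initiator's connect retries keep logging in the background]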
00:27:08.826 [2024-07-15 20:52:42.985931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.826 [2024-07-15 20:52:42.985943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.826 qpair failed and we were unable to recover it.
00:27:08.826 [... the identical record repeats, with only the microsecond timestamps changing, for roughly a hundred consecutive attempts, 20:52:42.986201 through 20:52:43.007971, all for tqpair=0x7f4840000b90 ...]
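errno = 111 is ECONNREFUSED on Linux: the TCP SYN reaches 10.0.0.2 but nothing is accepting on port 4420, so the kernel rejects the connect() and the initiator immediately retries. The same condition can be checked from a shell, independent of the NVMe driver; this probe is illustrative, not part of the test:

  # Probe the listener the initiator keeps dialing; a refusal here is the
  # shell-level equivalent of the errno = 111 records in this log.
  if timeout 1 bash -c 'echo > /dev/tcp/10.0.0.2/4420' 2>/dev/null; then
      echo "10.0.0.2:4420 is accepting connections"
  else
      echo "connect refused or timed out (ECONNREFUSED = errno 111)"
  fi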
00:27:08.829 [2024-07-15 20:52:43.008205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.829 [2024-07-15 20:52:43.008216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.829 qpair failed and we were unable to recover it.
00:27:08.829 [2024-07-15 20:52:43.008474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.829 [2024-07-15 20:52:43.008496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:08.829 qpair failed and we were unable to recover it.
00:27:08.829 [... the record repeats 11 more times for tqpair=0x18d9ed0, 20:52:43.008707 through 20:52:43.010917 ...]
00:27:08.829 [... it then resumes for tqpair=0x7f4840000b90, 7 records, 20:52:43.011116 through 20:52:43.012427 ...]
00:27:08.830 [... the same record continues for tqpair=0x7f4840000b90, about 20 more attempts, 20:52:43.012639 through 20:52:43.016699 ...]
00:27:08.831 [... two more identical records for tqpair=0x7f4840000b90 at 20:52:43.016821 and 20:52:43.017022 ...]
00:27:08.831 [2024-07-15 20:52:43.017254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.831 [2024-07-15 20:52:43.017273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:08.831 qpair failed and we were unable to recover it.
00:27:08.831 [... the record repeats 7 more times for tqpair=0x7f4838000b90, 20:52:43.017413 through 20:52:43.018802 ...]
00:27:08.831 [... the same record for tqpair=0x7f4838000b90 repeats 7 times, 20:52:43.019007 through 20:52:43.020426 ...]
00:27:08.831 20:52:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:08.831 [... two more identical records at 20:52:43.020616 and 20:52:43.020775 ...]
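The trap registered at nvmf/common.sh@484 is the tests' teardown hook: on SIGINT, SIGTERM, or normal EXIT it dumps the target app's shared-memory state and then runs nvmftestfini, with || : ensuring a failed dump never aborts the cleanup. The shape of that pattern, with placeholder bodies standing in for the real process_shm and nvmftestfini helpers:

  # Placeholder cleanup functions; only the trap wiring mirrors the log line.
  dump_shm()  { echo "would inspect shm segment $1"; }
  test_fini() { echo "would stop the nvmf target and clean up"; }

  NVMF_APP_SHM_ID=0
  trap 'dump_shm "$NVMF_APP_SHM_ID" || :; test_fini' SIGINT SIGTERM EXIT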
00:27:08.831 20:52:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:08.831 20:52:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:08.831 20:52:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:08.832 [... interleaved with the xtrace above, the same record for tqpair=0x7f4838000b90 repeats 9 times, 20:52:43.020926 through 20:52:43.022688 ...]
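target_disconnect.sh@19 creates the test's backing device over SPDK's JSON-RPC: a 64 MiB RAM-backed bdev with 512-byte blocks, named Malloc0. Outside the harness the same call is normally issued through scripts/rpc.py; the subsystem wiring after the first line is the typical next step for an nvmf test, shown for context rather than transcribed from this run:

  # Create the 64 MiB, 512 B-block malloc bdev requested by the script.
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0

  # Typical follow-up (illustrative): expose Malloc0 over NVMe/TCP on the
  # address and port the initiator above keeps probing.
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420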
00:27:08.832 [2024-07-15 20:52:43.022902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.832 [2024-07-15 20:52:43.022916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:08.832 qpair failed and we were unable to recover it.
00:27:08.832 [... the same connect() failed (errno = 111) / sock connection error / qpair failed triplet repeats for tqpair=0x7f4838000b90, addr=10.0.0.2, port=4420 ...]
00:27:08.834 [2024-07-15 20:52:43.039565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.834 [2024-07-15 20:52:43.039579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:08.834 qpair failed and we were unable to recover it.
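Each failing triplet above is one connect() attempt by the NVMe/TCP initiator against addr=10.0.0.2, port=4420; errno = 111 is ECONNREFUSED, i.e. nothing is accepting on that port yet because the target's listener has not been configured. A minimal sketch of probing for the listener from a shell, assuming nc(1) is available (it is not part of this test):

    # loop until something accepts TCP connections on the NVMe-oF port
    until nc -z 10.0.0.2 4420; do
        echo 'connect() refused (ECONNREFUSED), retrying'
        sleep 1
    done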
00:27:08.834 [2024-07-15 20:52:43.039850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.834 [2024-07-15 20:52:43.039864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:08.834 qpair failed and we were unable to recover it.
00:27:08.834 [... further identical retries for tqpair=0x7f4838000b90, addr=10.0.0.2, port=4420, through 20:52:43.041396 ...]
00:27:08.834 Malloc0
00:27:08.834 20:52:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:08.835 20:52:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:08.835 [2024-07-15 20:52:43.041898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.835 [2024-07-15 20:52:43.041912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:08.835 qpair failed and we were unable to recover it.
00:27:08.835 [2024-07-15 20:52:43.042058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.835 [2024-07-15 20:52:43.042071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:08.835 qpair failed and we were unable to recover it.
00:27:08.835 20:52:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:08.835 20:52:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:08.835 [... further identical retries for tqpair=0x7f4838000b90, addr=10.0.0.2, port=4420, through 20:52:43.044073 ...]
00:27:08.835 [2024-07-15 20:52:43.044288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.835 [2024-07-15 20:52:43.044302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4838000b90 with addr=10.0.0.2, port=4420
00:27:08.835 qpair failed and we were unable to recover it.
00:27:08.835 [... further identical retries for tqpair=0x7f4838000b90 through 20:52:43.045190 ...]
00:27:08.835 [2024-07-15 20:52:43.045439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.835 [2024-07-15 20:52:43.045477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4848000b90 with addr=10.0.0.2, port=4420
00:27:08.835 qpair failed and we were unable to recover it.
00:27:08.835 [2024-07-15 20:52:43.045688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.835 [2024-07-15 20:52:43.045709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9ed0 with addr=10.0.0.2, port=4420
00:27:08.835 qpair failed and we were unable to recover it.
00:27:08.835 [2024-07-15 20:52:43.045980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.835 [2024-07-15 20:52:43.045998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.835 qpair failed and we were unable to recover it.
00:27:08.836 [... further identical retries for tqpair=0x7f4840000b90, addr=10.0.0.2, port=4420, through 20:52:43.048275 ...]
00:27:08.836 [2024-07-15 20:52:43.048334] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:08.836 [2024-07-15 20:52:43.048508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.836 [2024-07-15 20:52:43.048519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.836 qpair failed and we were unable to recover it.
00:27:08.836 [... further identical retries for tqpair=0x7f4840000b90 through 20:52:43.050204 ...]
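The NOTICE line marks where the rpc_cmd nvmf_create_transport -t tcp -o issued at target_disconnect.sh@21 above takes effect and the target instantiates its TCP transport; the initiator's connect() retries keep failing until a listener is also added. A standalone sketch of the same RPC, flags copied verbatim from the test's invocation, assuming scripts/rpc.py:

    # instantiate the TCP transport inside the running target
    ./scripts/rpc.py nvmf_create_transport -t tcp -o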
00:27:08.836 [2024-07-15 20:52:43.050441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.836 [2024-07-15 20:52:43.050453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.836 qpair failed and we were unable to recover it.
00:27:08.837 [... the same connect() failed (errno = 111) / sock connection error / qpair failed triplet repeats for tqpair=0x7f4840000b90, addr=10.0.0.2, port=4420, through 20:52:43.056806 ...]
00:27:08.837 [2024-07-15 20:52:43.056938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.837 [2024-07-15 20:52:43.056950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.837 qpair failed and we were unable to recover it.
00:27:08.837 20:52:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:08.837 20:52:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:08.837 20:52:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:08.837 20:52:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:08.837 [... further identical connect() failed (errno = 111) / qpair failed retries for tqpair=0x7f4840000b90 through 20:52:43.058646 ...]
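target_disconnect.sh@22 creates the NVMe-oF subsystem the namespace will hang off: -a allows any host NQN to connect and -s sets the serial number reported to initiators. As a standalone sketch, assuming scripts/rpc.py:

    # create subsystem cnode1, allow any host (-a), serial number SPDK00000000000001 (-s)
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001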
00:27:08.837 [2024-07-15 20:52:43.058861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.837 [2024-07-15 20:52:43.058872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.837 qpair failed and we were unable to recover it. 00:27:08.837 [2024-07-15 20:52:43.059090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.837 [2024-07-15 20:52:43.059102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.837 qpair failed and we were unable to recover it. 00:27:08.837 [2024-07-15 20:52:43.059290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.837 [2024-07-15 20:52:43.059301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.837 qpair failed and we were unable to recover it. 00:27:08.837 [2024-07-15 20:52:43.059441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.837 [2024-07-15 20:52:43.059453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.837 qpair failed and we were unable to recover it. 00:27:08.837 [2024-07-15 20:52:43.059662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.837 [2024-07-15 20:52:43.059673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.837 qpair failed and we were unable to recover it. 00:27:08.837 [2024-07-15 20:52:43.059878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.837 [2024-07-15 20:52:43.059890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.837 qpair failed and we were unable to recover it. 00:27:08.838 [2024-07-15 20:52:43.060176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.838 [2024-07-15 20:52:43.060187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.838 qpair failed and we were unable to recover it. 00:27:08.838 [2024-07-15 20:52:43.060470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.838 [2024-07-15 20:52:43.060482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.838 qpair failed and we were unable to recover it. 00:27:08.838 [2024-07-15 20:52:43.060738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.838 [2024-07-15 20:52:43.060749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.838 qpair failed and we were unable to recover it. 00:27:08.838 [2024-07-15 20:52:43.060990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.838 [2024-07-15 20:52:43.061002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420 00:27:08.838 qpair failed and we were unable to recover it. 
00:27:08.838 [2024-07-15 20:52:43.061263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.838 [2024-07-15 20:52:43.061275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.838 qpair failed and we were unable to recover it.
00:27:08.838 [2024-07-15 20:52:43.061525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.838 [2024-07-15 20:52:43.061537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.838 qpair failed and we were unable to recover it.
00:27:08.838 [2024-07-15 20:52:43.061775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.838 [2024-07-15 20:52:43.061787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.838 qpair failed and we were unable to recover it.
00:27:08.838 [2024-07-15 20:52:43.061917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.838 [2024-07-15 20:52:43.061928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.838 qpair failed and we were unable to recover it.
00:27:08.838 [2024-07-15 20:52:43.062098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.838 [2024-07-15 20:52:43.062110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.838 qpair failed and we were unable to recover it.
00:27:08.838 [2024-07-15 20:52:43.062372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.838 [2024-07-15 20:52:43.062384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.838 qpair failed and we were unable to recover it.
00:27:08.838 [2024-07-15 20:52:43.062567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.838 [2024-07-15 20:52:43.062580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.838 qpair failed and we were unable to recover it.
00:27:08.838 [2024-07-15 20:52:43.062868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.838 [2024-07-15 20:52:43.062879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.838 qpair failed and we were unable to recover it.
00:27:08.838 [2024-07-15 20:52:43.063093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.838 [2024-07-15 20:52:43.063104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.838 qpair failed and we were unable to recover it.
00:27:08.838 [2024-07-15 20:52:43.063357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.838 [2024-07-15 20:52:43.063370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.838 qpair failed and we were unable to recover it.
00:27:08.838 [2024-07-15 20:52:43.063624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.838 [2024-07-15 20:52:43.063636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.838 qpair failed and we were unable to recover it.
00:27:08.838 [2024-07-15 20:52:43.063769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.838 [2024-07-15 20:52:43.063781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.838 qpair failed and we were unable to recover it.
00:27:08.838 [2024-07-15 20:52:43.063966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.838 [2024-07-15 20:52:43.063978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.838 qpair failed and we were unable to recover it.
00:27:08.838 [2024-07-15 20:52:43.064236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.838 [2024-07-15 20:52:43.064249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.838 qpair failed and we were unable to recover it.
00:27:08.838 [2024-07-15 20:52:43.064533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.838 [2024-07-15 20:52:43.064544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.838 qpair failed and we were unable to recover it.
00:27:08.838 [2024-07-15 20:52:43.064754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.838 [2024-07-15 20:52:43.064766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.838 qpair failed and we were unable to recover it.
00:27:08.838 [2024-07-15 20:52:43.065025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.838 20:52:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:08.838 [2024-07-15 20:52:43.065037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.838 qpair failed and we were unable to recover it.
00:27:08.838 [2024-07-15 20:52:43.065279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.838 [2024-07-15 20:52:43.065291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.838 qpair failed and we were unable to recover it.
00:27:08.838 20:52:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:08.838 [2024-07-15 20:52:43.065546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.838 [2024-07-15 20:52:43.065558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.838 qpair failed and we were unable to recover it.
00:27:08.838 20:52:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:08.838 [2024-07-15 20:52:43.065757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.838 [2024-07-15 20:52:43.065769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.838 qpair failed and we were unable to recover it.
00:27:08.838 [2024-07-15 20:52:43.065960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.838 [2024-07-15 20:52:43.065972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.838 20:52:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:08.838 qpair failed and we were unable to recover it.
00:27:08.838 [2024-07-15 20:52:43.066144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.838 [2024-07-15 20:52:43.066156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.838 qpair failed and we were unable to recover it.
00:27:08.838 [2024-07-15 20:52:43.066437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.838 [2024-07-15 20:52:43.066449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.838 qpair failed and we were unable to recover it.
00:27:08.838 [2024-07-15 20:52:43.066735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.838 [2024-07-15 20:52:43.066747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.838 qpair failed and we were unable to recover it.
00:27:08.838 [2024-07-15 20:52:43.066951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.838 [2024-07-15 20:52:43.066963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.838 qpair failed and we were unable to recover it.
00:27:08.839 [2024-07-15 20:52:43.067198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.839 [2024-07-15 20:52:43.067209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.839 qpair failed and we were unable to recover it.
00:27:08.839 [2024-07-15 20:52:43.067427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.839 [2024-07-15 20:52:43.067439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.839 qpair failed and we were unable to recover it.
00:27:08.839 [2024-07-15 20:52:43.067566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.839 [2024-07-15 20:52:43.067578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.839 qpair failed and we were unable to recover it.
00:27:08.839 [2024-07-15 20:52:43.067821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.839 [2024-07-15 20:52:43.067833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.839 qpair failed and we were unable to recover it.
00:27:08.839 [2024-07-15 20:52:43.068111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.839 [2024-07-15 20:52:43.068123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.839 qpair failed and we were unable to recover it.
00:27:08.839 [2024-07-15 20:52:43.068301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.839 [2024-07-15 20:52:43.068314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.839 qpair failed and we were unable to recover it.
00:27:08.839 [2024-07-15 20:52:43.068504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.839 [2024-07-15 20:52:43.068515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.839 qpair failed and we were unable to recover it.
00:27:08.839 [2024-07-15 20:52:43.068780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.839 [2024-07-15 20:52:43.068792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.839 qpair failed and we were unable to recover it.
00:27:08.839 [2024-07-15 20:52:43.068924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.839 [2024-07-15 20:52:43.068937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.839 qpair failed and we were unable to recover it.
00:27:08.839 [2024-07-15 20:52:43.069217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.839 [2024-07-15 20:52:43.069232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.839 qpair failed and we were unable to recover it.
00:27:08.839 [2024-07-15 20:52:43.069351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.839 [2024-07-15 20:52:43.069363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.839 qpair failed and we were unable to recover it.
00:27:08.839 [2024-07-15 20:52:43.069550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.839 [2024-07-15 20:52:43.069562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.839 qpair failed and we were unable to recover it.
00:27:08.839 [2024-07-15 20:52:43.069740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.839 [2024-07-15 20:52:43.069753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.839 qpair failed and we were unable to recover it.
00:27:08.839 [2024-07-15 20:52:43.070014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.839 [2024-07-15 20:52:43.070027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.839 qpair failed and we were unable to recover it.
00:27:08.839 [2024-07-15 20:52:43.070306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.839 [2024-07-15 20:52:43.070318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.839 qpair failed and we were unable to recover it.
00:27:08.839 [2024-07-15 20:52:43.070564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.839 [2024-07-15 20:52:43.070575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.839 qpair failed and we were unable to recover it.
00:27:08.839 [2024-07-15 20:52:43.070820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.839 [2024-07-15 20:52:43.070832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.839 qpair failed and we were unable to recover it.
00:27:08.839 [2024-07-15 20:52:43.071067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.839 [2024-07-15 20:52:43.071078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.839 qpair failed and we were unable to recover it.
00:27:08.839 [2024-07-15 20:52:43.071345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.839 [2024-07-15 20:52:43.071356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.839 qpair failed and we were unable to recover it.
00:27:08.839 [2024-07-15 20:52:43.071607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.839 [2024-07-15 20:52:43.071619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.839 qpair failed and we were unable to recover it.
00:27:08.839 [2024-07-15 20:52:43.071875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.839 [2024-07-15 20:52:43.071887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.839 qpair failed and we were unable to recover it.
00:27:08.839 [2024-07-15 20:52:43.072130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.839 [2024-07-15 20:52:43.072145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.839 qpair failed and we were unable to recover it.
00:27:08.839 [2024-07-15 20:52:43.072395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.839 [2024-07-15 20:52:43.072407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.839 qpair failed and we were unable to recover it.
00:27:08.839 [2024-07-15 20:52:43.072659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.839 [2024-07-15 20:52:43.072670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.839 qpair failed and we were unable to recover it.
00:27:08.839 [2024-07-15 20:52:43.072841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.839 [2024-07-15 20:52:43.072852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.839 qpair failed and we were unable to recover it.
00:27:08.839 20:52:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:08.839 [2024-07-15 20:52:43.073089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.839 [2024-07-15 20:52:43.073101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.839 qpair failed and we were unable to recover it.
00:27:08.839 [2024-07-15 20:52:43.073337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.839 [2024-07-15 20:52:43.073350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.839 qpair failed and we were unable to recover it.
00:27:08.839 20:52:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:08.839 [2024-07-15 20:52:43.073552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.839 [2024-07-15 20:52:43.073564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.839 qpair failed and we were unable to recover it.
00:27:08.839 20:52:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:08.839 [2024-07-15 20:52:43.073825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.839 [2024-07-15 20:52:43.073837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.839 qpair failed and we were unable to recover it.
00:27:08.839 20:52:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:08.839 [2024-07-15 20:52:43.074039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.839 [2024-07-15 20:52:43.074052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.839 qpair failed and we were unable to recover it.
00:27:08.839 [2024-07-15 20:52:43.074232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.839 [2024-07-15 20:52:43.074244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.839 qpair failed and we were unable to recover it.
00:27:08.839 [2024-07-15 20:52:43.074482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.839 [2024-07-15 20:52:43.074493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.839 qpair failed and we were unable to recover it.
00:27:08.839 [2024-07-15 20:52:43.074680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.839 [2024-07-15 20:52:43.074693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.839 qpair failed and we were unable to recover it.
00:27:08.839 [2024-07-15 20:52:43.074952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.839 [2024-07-15 20:52:43.074964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.839 qpair failed and we were unable to recover it.
00:27:08.839 [2024-07-15 20:52:43.075151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.839 [2024-07-15 20:52:43.075163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.839 qpair failed and we were unable to recover it.
00:27:08.839 [2024-07-15 20:52:43.075346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.839 [2024-07-15 20:52:43.075358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.839 qpair failed and we were unable to recover it.
00:27:08.839 [2024-07-15 20:52:43.075545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.839 [2024-07-15 20:52:43.075557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.839 qpair failed and we were unable to recover it.
00:27:08.839 [2024-07-15 20:52:43.075791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.839 [2024-07-15 20:52:43.075803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.839 qpair failed and we were unable to recover it.
00:27:08.839 [2024-07-15 20:52:43.076081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.839 [2024-07-15 20:52:43.076093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.840 qpair failed and we were unable to recover it.
00:27:08.840 [2024-07-15 20:52:43.076334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.840 [2024-07-15 20:52:43.076346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.840 qpair failed and we were unable to recover it.
00:27:08.840 [2024-07-15 20:52:43.076567] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:08.840 [2024-07-15 20:52:43.076607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.840 [2024-07-15 20:52:43.076618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4840000b90 with addr=10.0.0.2, port=4420
00:27:08.840 qpair failed and we were unable to recover it.
00:27:08.840 [2024-07-15 20:52:43.078878] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.840 [2024-07-15 20:52:43.079006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.840 [2024-07-15 20:52:43.079026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.840 [2024-07-15 20:52:43.079034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.840 [2024-07-15 20:52:43.079041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:08.840 [2024-07-15 20:52:43.079062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.840 qpair failed and we were unable to recover it.
00:27:08.840 20:52:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:08.840 20:52:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:08.840 20:52:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:08.840 20:52:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:08.840 [2024-07-15 20:52:43.088875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.840 [2024-07-15 20:52:43.088949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.840 [2024-07-15 20:52:43.088965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.840 [2024-07-15 20:52:43.088973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.840 [2024-07-15 20:52:43.088979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:08.840 [2024-07-15 20:52:43.088995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.840 qpair failed and we were unable to recover it.
00:27:08.840 20:52:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:08.840 20:52:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2846446
00:27:08.840 [2024-07-15 20:52:43.098808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.840 [2024-07-15 20:52:43.098878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.840 [2024-07-15 20:52:43.098894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.840 [2024-07-15 20:52:43.098902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.840 [2024-07-15 20:52:43.098908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:08.840 [2024-07-15 20:52:43.098924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.840 qpair failed and we were unable to recover it.
00:27:08.840 [2024-07-15 20:52:43.108818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.840 [2024-07-15 20:52:43.108891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.840 [2024-07-15 20:52:43.108907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.840 [2024-07-15 20:52:43.108914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.840 [2024-07-15 20:52:43.108920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:08.840 [2024-07-15 20:52:43.108935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.840 qpair failed and we were unable to recover it.
00:27:08.840 [2024-07-15 20:52:43.118864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.840 [2024-07-15 20:52:43.118981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.840 [2024-07-15 20:52:43.118999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.840 [2024-07-15 20:52:43.119006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.840 [2024-07-15 20:52:43.119013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:08.840 [2024-07-15 20:52:43.119028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.840 qpair failed and we were unable to recover it.
00:27:08.840 [2024-07-15 20:52:43.128894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.840 [2024-07-15 20:52:43.128965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.840 [2024-07-15 20:52:43.128983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.840 [2024-07-15 20:52:43.128990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.840 [2024-07-15 20:52:43.128997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:08.840 [2024-07-15 20:52:43.129012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.840 qpair failed and we were unable to recover it.
00:27:08.840 [2024-07-15 20:52:43.138927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.840 [2024-07-15 20:52:43.139040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.840 [2024-07-15 20:52:43.139056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.840 [2024-07-15 20:52:43.139064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.840 [2024-07-15 20:52:43.139070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:08.840 [2024-07-15 20:52:43.139086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.840 qpair failed and we were unable to recover it.
00:27:08.840 [2024-07-15 20:52:43.148909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.840 [2024-07-15 20:52:43.149015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.840 [2024-07-15 20:52:43.149031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.840 [2024-07-15 20:52:43.149038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.840 [2024-07-15 20:52:43.149045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:08.840 [2024-07-15 20:52:43.149062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.840 qpair failed and we were unable to recover it.
00:27:08.840 [2024-07-15 20:52:43.158984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.840 [2024-07-15 20:52:43.159057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.840 [2024-07-15 20:52:43.159072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.840 [2024-07-15 20:52:43.159080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.840 [2024-07-15 20:52:43.159086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:08.840 [2024-07-15 20:52:43.159101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.840 qpair failed and we were unable to recover it.
00:27:08.840 [2024-07-15 20:52:43.169009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.840 [2024-07-15 20:52:43.169077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.840 [2024-07-15 20:52:43.169096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.840 [2024-07-15 20:52:43.169104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.840 [2024-07-15 20:52:43.169113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:08.841 [2024-07-15 20:52:43.169130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.841 qpair failed and we were unable to recover it.
00:27:08.841 [2024-07-15 20:52:43.179010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.841 [2024-07-15 20:52:43.179076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.841 [2024-07-15 20:52:43.179091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.841 [2024-07-15 20:52:43.179099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.841 [2024-07-15 20:52:43.179106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:08.841 [2024-07-15 20:52:43.179121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.841 qpair failed and we were unable to recover it.
00:27:08.841 [2024-07-15 20:52:43.189047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.841 [2024-07-15 20:52:43.189123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.841 [2024-07-15 20:52:43.189139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.841 [2024-07-15 20:52:43.189147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.841 [2024-07-15 20:52:43.189153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:08.841 [2024-07-15 20:52:43.189168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.841 qpair failed and we were unable to recover it.
00:27:08.841 [2024-07-15 20:52:43.199091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.841 [2024-07-15 20:52:43.199166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.841 [2024-07-15 20:52:43.199183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.841 [2024-07-15 20:52:43.199192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.841 [2024-07-15 20:52:43.199198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:08.841 [2024-07-15 20:52:43.199214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.841 qpair failed and we were unable to recover it.
00:27:08.841 [2024-07-15 20:52:43.209138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.841 [2024-07-15 20:52:43.209217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.841 [2024-07-15 20:52:43.209237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.841 [2024-07-15 20:52:43.209245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.841 [2024-07-15 20:52:43.209251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:08.841 [2024-07-15 20:52:43.209267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.841 qpair failed and we were unable to recover it.
00:27:08.841 [2024-07-15 20:52:43.219184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.841 [2024-07-15 20:52:43.219311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.841 [2024-07-15 20:52:43.219329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.841 [2024-07-15 20:52:43.219336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.841 [2024-07-15 20:52:43.219342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:08.841 [2024-07-15 20:52:43.219358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.841 qpair failed and we were unable to recover it.
00:27:08.841 [2024-07-15 20:52:43.229143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.841 [2024-07-15 20:52:43.229216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.841 [2024-07-15 20:52:43.229235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.841 [2024-07-15 20:52:43.229243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.841 [2024-07-15 20:52:43.229250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:08.841 [2024-07-15 20:52:43.229265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.841 qpair failed and we were unable to recover it.
00:27:08.841 [2024-07-15 20:52:43.239164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.841 [2024-07-15 20:52:43.239241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.841 [2024-07-15 20:52:43.239258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.841 [2024-07-15 20:52:43.239265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.841 [2024-07-15 20:52:43.239271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:08.841 [2024-07-15 20:52:43.239286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.841 qpair failed and we were unable to recover it.
00:27:08.841 [2024-07-15 20:52:43.249227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.841 [2024-07-15 20:52:43.249339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.841 [2024-07-15 20:52:43.249360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.841 [2024-07-15 20:52:43.249367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.841 [2024-07-15 20:52:43.249374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:08.841 [2024-07-15 20:52:43.249391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.841 qpair failed and we were unable to recover it.
00:27:08.841 [2024-07-15 20:52:43.259274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.841 [2024-07-15 20:52:43.259338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.841 [2024-07-15 20:52:43.259354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.841 [2024-07-15 20:52:43.259361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.841 [2024-07-15 20:52:43.259372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:08.841 [2024-07-15 20:52:43.259388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.841 qpair failed and we were unable to recover it.
00:27:08.841 [2024-07-15 20:52:43.269267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.841 [2024-07-15 20:52:43.269338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.841 [2024-07-15 20:52:43.269354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.841 [2024-07-15 20:52:43.269361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.841 [2024-07-15 20:52:43.269367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:08.842 [2024-07-15 20:52:43.269382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.842 qpair failed and we were unable to recover it.
00:27:08.842 [2024-07-15 20:52:43.279283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.842 [2024-07-15 20:52:43.279364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.842 [2024-07-15 20:52:43.279379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.842 [2024-07-15 20:52:43.279387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.842 [2024-07-15 20:52:43.279393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:08.842 [2024-07-15 20:52:43.279408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.842 qpair failed and we were unable to recover it.
00:27:08.842 [2024-07-15 20:52:43.289344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.842 [2024-07-15 20:52:43.289419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.842 [2024-07-15 20:52:43.289434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.842 [2024-07-15 20:52:43.289441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.842 [2024-07-15 20:52:43.289447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:08.842 [2024-07-15 20:52:43.289462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.842 qpair failed and we were unable to recover it.
00:27:08.842 [2024-07-15 20:52:43.299381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.103 [2024-07-15 20:52:43.299449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.103 [2024-07-15 20:52:43.299466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.103 [2024-07-15 20:52:43.299474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.103 [2024-07-15 20:52:43.299480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.103 [2024-07-15 20:52:43.299497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.103 qpair failed and we were unable to recover it.
00:27:09.103 [2024-07-15 20:52:43.309399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.103 [2024-07-15 20:52:43.309470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.103 [2024-07-15 20:52:43.309487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.103 [2024-07-15 20:52:43.309496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.103 [2024-07-15 20:52:43.309502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.103 [2024-07-15 20:52:43.309518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.103 qpair failed and we were unable to recover it.
00:27:09.103 [2024-07-15 20:52:43.319499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.103 [2024-07-15 20:52:43.319576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.103 [2024-07-15 20:52:43.319592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.103 [2024-07-15 20:52:43.319599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.103 [2024-07-15 20:52:43.319605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.103 [2024-07-15 20:52:43.319621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.103 qpair failed and we were unable to recover it.
00:27:09.103 [2024-07-15 20:52:43.329435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.103 [2024-07-15 20:52:43.329500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.103 [2024-07-15 20:52:43.329516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.103 [2024-07-15 20:52:43.329524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.103 [2024-07-15 20:52:43.329531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.103 [2024-07-15 20:52:43.329547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.103 qpair failed and we were unable to recover it.
00:27:09.103 [2024-07-15 20:52:43.339534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.103 [2024-07-15 20:52:43.339612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.103 [2024-07-15 20:52:43.339628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.103 [2024-07-15 20:52:43.339635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.103 [2024-07-15 20:52:43.339641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.103 [2024-07-15 20:52:43.339656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.103 qpair failed and we were unable to recover it.
00:27:09.103 [2024-07-15 20:52:43.349497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.103 [2024-07-15 20:52:43.349618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.103 [2024-07-15 20:52:43.349635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.103 [2024-07-15 20:52:43.349646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.103 [2024-07-15 20:52:43.349653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.103 [2024-07-15 20:52:43.349669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.103 qpair failed and we were unable to recover it.
00:27:09.103 [2024-07-15 20:52:43.359539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.103 [2024-07-15 20:52:43.359606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.103 [2024-07-15 20:52:43.359621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.103 [2024-07-15 20:52:43.359629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.103 [2024-07-15 20:52:43.359635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.103 [2024-07-15 20:52:43.359649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.103 qpair failed and we were unable to recover it.
00:27:09.103 [2024-07-15 20:52:43.369612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.103 [2024-07-15 20:52:43.369687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.103 [2024-07-15 20:52:43.369703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.103 [2024-07-15 20:52:43.369710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.103 [2024-07-15 20:52:43.369716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.103 [2024-07-15 20:52:43.369731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.103 qpair failed and we were unable to recover it.
00:27:09.103 [2024-07-15 20:52:43.379577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.103 [2024-07-15 20:52:43.379669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.103 [2024-07-15 20:52:43.379684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.103 [2024-07-15 20:52:43.379691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.103 [2024-07-15 20:52:43.379697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.103 [2024-07-15 20:52:43.379713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.103 qpair failed and we were unable to recover it.
00:27:09.103 [2024-07-15 20:52:43.389664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.103 [2024-07-15 20:52:43.389740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.103 [2024-07-15 20:52:43.389756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.103 [2024-07-15 20:52:43.389764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.103 [2024-07-15 20:52:43.389770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.103 [2024-07-15 20:52:43.389785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.103 qpair failed and we were unable to recover it.
00:27:09.103 [2024-07-15 20:52:43.399594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.103 [2024-07-15 20:52:43.399664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.103 [2024-07-15 20:52:43.399679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.103 [2024-07-15 20:52:43.399686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.103 [2024-07-15 20:52:43.399692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.103 [2024-07-15 20:52:43.399708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.103 qpair failed and we were unable to recover it. 00:27:09.103 [2024-07-15 20:52:43.409685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.103 [2024-07-15 20:52:43.409755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.103 [2024-07-15 20:52:43.409770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.103 [2024-07-15 20:52:43.409778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.103 [2024-07-15 20:52:43.409784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.103 [2024-07-15 20:52:43.409798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.103 qpair failed and we were unable to recover it. 00:27:09.103 [2024-07-15 20:52:43.419724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.103 [2024-07-15 20:52:43.419823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.103 [2024-07-15 20:52:43.419841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.103 [2024-07-15 20:52:43.419848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.103 [2024-07-15 20:52:43.419855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.103 [2024-07-15 20:52:43.419870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.103 qpair failed and we were unable to recover it. 
00:27:09.103 [2024-07-15 20:52:43.429690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.103 [2024-07-15 20:52:43.429815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.103 [2024-07-15 20:52:43.429831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.103 [2024-07-15 20:52:43.429838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.103 [2024-07-15 20:52:43.429845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.103 [2024-07-15 20:52:43.429860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.103 qpair failed and we were unable to recover it. 00:27:09.104 [2024-07-15 20:52:43.439824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.104 [2024-07-15 20:52:43.439932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.104 [2024-07-15 20:52:43.439952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.104 [2024-07-15 20:52:43.439959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.104 [2024-07-15 20:52:43.439966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.104 [2024-07-15 20:52:43.439982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.104 qpair failed and we were unable to recover it. 00:27:09.104 [2024-07-15 20:52:43.449795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.104 [2024-07-15 20:52:43.449866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.104 [2024-07-15 20:52:43.449881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.104 [2024-07-15 20:52:43.449888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.104 [2024-07-15 20:52:43.449895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.104 [2024-07-15 20:52:43.449909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.104 qpair failed and we were unable to recover it. 
00:27:09.104 [2024-07-15 20:52:43.459815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.104 [2024-07-15 20:52:43.459880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.104 [2024-07-15 20:52:43.459895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.104 [2024-07-15 20:52:43.459902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.104 [2024-07-15 20:52:43.459909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.104 [2024-07-15 20:52:43.459925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.104 qpair failed and we were unable to recover it. 00:27:09.104 [2024-07-15 20:52:43.469801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.104 [2024-07-15 20:52:43.469873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.104 [2024-07-15 20:52:43.469888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.104 [2024-07-15 20:52:43.469895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.104 [2024-07-15 20:52:43.469902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.104 [2024-07-15 20:52:43.469917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.104 qpair failed and we were unable to recover it. 00:27:09.104 [2024-07-15 20:52:43.479826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.104 [2024-07-15 20:52:43.479898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.104 [2024-07-15 20:52:43.479912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.104 [2024-07-15 20:52:43.479920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.104 [2024-07-15 20:52:43.479926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.104 [2024-07-15 20:52:43.479944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.104 qpair failed and we were unable to recover it. 
00:27:09.104 [2024-07-15 20:52:43.489865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.104 [2024-07-15 20:52:43.489928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.104 [2024-07-15 20:52:43.489944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.104 [2024-07-15 20:52:43.489951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.104 [2024-07-15 20:52:43.489957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.104 [2024-07-15 20:52:43.489973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.104 qpair failed and we were unable to recover it. 00:27:09.104 [2024-07-15 20:52:43.500007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.104 [2024-07-15 20:52:43.500068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.104 [2024-07-15 20:52:43.500085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.104 [2024-07-15 20:52:43.500093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.104 [2024-07-15 20:52:43.500100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.104 [2024-07-15 20:52:43.500115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.104 qpair failed and we were unable to recover it. 00:27:09.104 [2024-07-15 20:52:43.509988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.104 [2024-07-15 20:52:43.510062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.104 [2024-07-15 20:52:43.510078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.104 [2024-07-15 20:52:43.510086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.104 [2024-07-15 20:52:43.510092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.104 [2024-07-15 20:52:43.510106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.104 qpair failed and we were unable to recover it. 
00:27:09.104 [2024-07-15 20:52:43.519993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.104 [2024-07-15 20:52:43.520065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.104 [2024-07-15 20:52:43.520080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.104 [2024-07-15 20:52:43.520088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.104 [2024-07-15 20:52:43.520094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.104 [2024-07-15 20:52:43.520109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.104 qpair failed and we were unable to recover it. 00:27:09.104 [2024-07-15 20:52:43.530047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.104 [2024-07-15 20:52:43.530124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.104 [2024-07-15 20:52:43.530142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.104 [2024-07-15 20:52:43.530149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.104 [2024-07-15 20:52:43.530156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.104 [2024-07-15 20:52:43.530170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.104 qpair failed and we were unable to recover it. 00:27:09.104 [2024-07-15 20:52:43.540003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.104 [2024-07-15 20:52:43.540069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.104 [2024-07-15 20:52:43.540085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.104 [2024-07-15 20:52:43.540092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.104 [2024-07-15 20:52:43.540099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.104 [2024-07-15 20:52:43.540114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.104 qpair failed and we were unable to recover it. 
00:27:09.104 [2024-07-15 20:52:43.550034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.104 [2024-07-15 20:52:43.550100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.104 [2024-07-15 20:52:43.550115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.104 [2024-07-15 20:52:43.550122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.104 [2024-07-15 20:52:43.550129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.104 [2024-07-15 20:52:43.550144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.104 qpair failed and we were unable to recover it. 00:27:09.104 [2024-07-15 20:52:43.560057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.104 [2024-07-15 20:52:43.560128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.104 [2024-07-15 20:52:43.560143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.104 [2024-07-15 20:52:43.560150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.104 [2024-07-15 20:52:43.560156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.104 [2024-07-15 20:52:43.560171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.104 qpair failed and we were unable to recover it. 00:27:09.104 [2024-07-15 20:52:43.570156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.104 [2024-07-15 20:52:43.570218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.104 [2024-07-15 20:52:43.570239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.104 [2024-07-15 20:52:43.570247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.104 [2024-07-15 20:52:43.570254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.104 [2024-07-15 20:52:43.570272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.104 qpair failed and we were unable to recover it. 
00:27:09.104 [2024-07-15 20:52:43.580122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.105 [2024-07-15 20:52:43.580181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.105 [2024-07-15 20:52:43.580199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.105 [2024-07-15 20:52:43.580206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.105 [2024-07-15 20:52:43.580213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.105 [2024-07-15 20:52:43.580232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.105 qpair failed and we were unable to recover it. 00:27:09.365 [2024-07-15 20:52:43.590166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.365 [2024-07-15 20:52:43.590254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.366 [2024-07-15 20:52:43.590271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.366 [2024-07-15 20:52:43.590279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.366 [2024-07-15 20:52:43.590286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.366 [2024-07-15 20:52:43.590301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.366 qpair failed and we were unable to recover it. 00:27:09.366 [2024-07-15 20:52:43.600240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.366 [2024-07-15 20:52:43.600307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.366 [2024-07-15 20:52:43.600322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.366 [2024-07-15 20:52:43.600329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.366 [2024-07-15 20:52:43.600335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.366 [2024-07-15 20:52:43.600350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.366 qpair failed and we were unable to recover it. 
00:27:09.366 [2024-07-15 20:52:43.610254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.366 [2024-07-15 20:52:43.610365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.366 [2024-07-15 20:52:43.610381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.366 [2024-07-15 20:52:43.610388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.366 [2024-07-15 20:52:43.610396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.366 [2024-07-15 20:52:43.610411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.366 qpair failed and we were unable to recover it. 00:27:09.366 [2024-07-15 20:52:43.620241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.366 [2024-07-15 20:52:43.620306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.366 [2024-07-15 20:52:43.620322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.366 [2024-07-15 20:52:43.620329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.366 [2024-07-15 20:52:43.620335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.366 [2024-07-15 20:52:43.620350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.366 qpair failed and we were unable to recover it. 00:27:09.366 [2024-07-15 20:52:43.630279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.366 [2024-07-15 20:52:43.630347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.366 [2024-07-15 20:52:43.630363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.366 [2024-07-15 20:52:43.630370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.366 [2024-07-15 20:52:43.630376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.366 [2024-07-15 20:52:43.630392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.366 qpair failed and we were unable to recover it. 
00:27:09.366 [2024-07-15 20:52:43.640279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.366 [2024-07-15 20:52:43.640347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.366 [2024-07-15 20:52:43.640363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.366 [2024-07-15 20:52:43.640370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.366 [2024-07-15 20:52:43.640376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.366 [2024-07-15 20:52:43.640392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.366 qpair failed and we were unable to recover it. 00:27:09.366 [2024-07-15 20:52:43.650322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.366 [2024-07-15 20:52:43.650402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.366 [2024-07-15 20:52:43.650417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.366 [2024-07-15 20:52:43.650424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.366 [2024-07-15 20:52:43.650430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.366 [2024-07-15 20:52:43.650445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.366 qpair failed and we were unable to recover it. 00:27:09.366 [2024-07-15 20:52:43.660402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.366 [2024-07-15 20:52:43.660468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.366 [2024-07-15 20:52:43.660483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.366 [2024-07-15 20:52:43.660491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.366 [2024-07-15 20:52:43.660500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.366 [2024-07-15 20:52:43.660514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.366 qpair failed and we were unable to recover it. 
00:27:09.366 [2024-07-15 20:52:43.670371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.366 [2024-07-15 20:52:43.670441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.366 [2024-07-15 20:52:43.670456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.366 [2024-07-15 20:52:43.670463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.366 [2024-07-15 20:52:43.670470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.366 [2024-07-15 20:52:43.670485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.366 qpair failed and we were unable to recover it. 00:27:09.366 [2024-07-15 20:52:43.680401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.366 [2024-07-15 20:52:43.680474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.366 [2024-07-15 20:52:43.680489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.366 [2024-07-15 20:52:43.680496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.366 [2024-07-15 20:52:43.680502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.366 [2024-07-15 20:52:43.680517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.366 qpair failed and we were unable to recover it. 00:27:09.366 [2024-07-15 20:52:43.690496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.366 [2024-07-15 20:52:43.690583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.366 [2024-07-15 20:52:43.690597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.366 [2024-07-15 20:52:43.690605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.367 [2024-07-15 20:52:43.690611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.367 [2024-07-15 20:52:43.690625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.367 qpair failed and we were unable to recover it. 
00:27:09.367 [2024-07-15 20:52:43.700459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.367 [2024-07-15 20:52:43.700531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.367 [2024-07-15 20:52:43.700546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.367 [2024-07-15 20:52:43.700553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.367 [2024-07-15 20:52:43.700559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.367 [2024-07-15 20:52:43.700573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.367 qpair failed and we were unable to recover it. 00:27:09.367 [2024-07-15 20:52:43.710493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.367 [2024-07-15 20:52:43.710560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.367 [2024-07-15 20:52:43.710575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.367 [2024-07-15 20:52:43.710582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.367 [2024-07-15 20:52:43.710588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.367 [2024-07-15 20:52:43.710603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.367 qpair failed and we were unable to recover it. 00:27:09.367 [2024-07-15 20:52:43.720547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.367 [2024-07-15 20:52:43.720616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.367 [2024-07-15 20:52:43.720630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.367 [2024-07-15 20:52:43.720637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.367 [2024-07-15 20:52:43.720644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.367 [2024-07-15 20:52:43.720658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.367 qpair failed and we were unable to recover it. 
00:27:09.367 [2024-07-15 20:52:43.730675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.367 [2024-07-15 20:52:43.730786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.367 [2024-07-15 20:52:43.730802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.367 [2024-07-15 20:52:43.730809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.367 [2024-07-15 20:52:43.730815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.367 [2024-07-15 20:52:43.730830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.367 qpair failed and we were unable to recover it. 00:27:09.367 [2024-07-15 20:52:43.740575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.367 [2024-07-15 20:52:43.740638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.367 [2024-07-15 20:52:43.740652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.367 [2024-07-15 20:52:43.740659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.367 [2024-07-15 20:52:43.740665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.367 [2024-07-15 20:52:43.740680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.367 qpair failed and we were unable to recover it. 00:27:09.367 [2024-07-15 20:52:43.750712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.367 [2024-07-15 20:52:43.750782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.367 [2024-07-15 20:52:43.750797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.367 [2024-07-15 20:52:43.750807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.367 [2024-07-15 20:52:43.750813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.367 [2024-07-15 20:52:43.750827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.367 qpair failed and we were unable to recover it. 
00:27:09.367 [2024-07-15 20:52:43.760694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.367 [2024-07-15 20:52:43.760772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.367 [2024-07-15 20:52:43.760787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.367 [2024-07-15 20:52:43.760793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.367 [2024-07-15 20:52:43.760799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.367 [2024-07-15 20:52:43.760813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.367 qpair failed and we were unable to recover it. 00:27:09.367 [2024-07-15 20:52:43.770750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.367 [2024-07-15 20:52:43.770821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.367 [2024-07-15 20:52:43.770836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.367 [2024-07-15 20:52:43.770843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.367 [2024-07-15 20:52:43.770849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.367 [2024-07-15 20:52:43.770863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.367 qpair failed and we were unable to recover it. 00:27:09.367 [2024-07-15 20:52:43.780765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.367 [2024-07-15 20:52:43.780867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.367 [2024-07-15 20:52:43.780881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.367 [2024-07-15 20:52:43.780888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.367 [2024-07-15 20:52:43.780894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.367 [2024-07-15 20:52:43.780909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.367 qpair failed and we were unable to recover it. 
00:27:09.367 [2024-07-15 20:52:43.790791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.367 [2024-07-15 20:52:43.790858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.367 [2024-07-15 20:52:43.790872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.367 [2024-07-15 20:52:43.790879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.367 [2024-07-15 20:52:43.790885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.367 [2024-07-15 20:52:43.790899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.367 qpair failed and we were unable to recover it. 00:27:09.367 [2024-07-15 20:52:43.800838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.368 [2024-07-15 20:52:43.800940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.368 [2024-07-15 20:52:43.800955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.368 [2024-07-15 20:52:43.800962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.368 [2024-07-15 20:52:43.800968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.368 [2024-07-15 20:52:43.800982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.368 qpair failed and we were unable to recover it. 00:27:09.368 [2024-07-15 20:52:43.810843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.368 [2024-07-15 20:52:43.810904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.368 [2024-07-15 20:52:43.810919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.368 [2024-07-15 20:52:43.810926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.368 [2024-07-15 20:52:43.810933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.368 [2024-07-15 20:52:43.810947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.368 qpair failed and we were unable to recover it. 
00:27:09.368 [2024-07-15 20:52:43.820857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.368 [2024-07-15 20:52:43.820914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.368 [2024-07-15 20:52:43.820929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.368 [2024-07-15 20:52:43.820936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.368 [2024-07-15 20:52:43.820942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.368 [2024-07-15 20:52:43.820956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.368 qpair failed and we were unable to recover it. 00:27:09.368 [2024-07-15 20:52:43.830907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.368 [2024-07-15 20:52:43.830974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.368 [2024-07-15 20:52:43.830989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.368 [2024-07-15 20:52:43.830995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.368 [2024-07-15 20:52:43.831001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.368 [2024-07-15 20:52:43.831015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.368 qpair failed and we were unable to recover it. 00:27:09.368 [2024-07-15 20:52:43.840988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.368 [2024-07-15 20:52:43.841106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.368 [2024-07-15 20:52:43.841125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.368 [2024-07-15 20:52:43.841136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.368 [2024-07-15 20:52:43.841143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.368 [2024-07-15 20:52:43.841158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.368 qpair failed and we were unable to recover it. 
00:27:09.628 [2024-07-15 20:52:43.850951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.628 [2024-07-15 20:52:43.851013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.628 [2024-07-15 20:52:43.851029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.628 [2024-07-15 20:52:43.851035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.628 [2024-07-15 20:52:43.851041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.628 [2024-07-15 20:52:43.851055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.628 qpair failed and we were unable to recover it. 00:27:09.628 [2024-07-15 20:52:43.861025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.628 [2024-07-15 20:52:43.861092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.628 [2024-07-15 20:52:43.861108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.628 [2024-07-15 20:52:43.861114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.628 [2024-07-15 20:52:43.861120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.628 [2024-07-15 20:52:43.861135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.628 qpair failed and we were unable to recover it. 00:27:09.628 [2024-07-15 20:52:43.871042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.628 [2024-07-15 20:52:43.871118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.628 [2024-07-15 20:52:43.871133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.628 [2024-07-15 20:52:43.871140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.628 [2024-07-15 20:52:43.871146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.628 [2024-07-15 20:52:43.871160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.628 qpair failed and we were unable to recover it. 
00:27:09.628 [2024-07-15 20:52:43.881114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.628 [2024-07-15 20:52:43.881223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.628 [2024-07-15 20:52:43.881242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.628 [2024-07-15 20:52:43.881248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.628 [2024-07-15 20:52:43.881254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.628 [2024-07-15 20:52:43.881269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.628 qpair failed and we were unable to recover it. 00:27:09.628 [2024-07-15 20:52:43.891093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.628 [2024-07-15 20:52:43.891177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.628 [2024-07-15 20:52:43.891192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.628 [2024-07-15 20:52:43.891199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.628 [2024-07-15 20:52:43.891205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.629 [2024-07-15 20:52:43.891219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.629 qpair failed and we were unable to recover it. 00:27:09.629 [2024-07-15 20:52:43.901098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.629 [2024-07-15 20:52:43.901163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.629 [2024-07-15 20:52:43.901178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.629 [2024-07-15 20:52:43.901184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.629 [2024-07-15 20:52:43.901190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.629 [2024-07-15 20:52:43.901204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.629 qpair failed and we were unable to recover it. 
00:27:09.629 [2024-07-15 20:52:43.911133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.629 [2024-07-15 20:52:43.911201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.629 [2024-07-15 20:52:43.911215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.629 [2024-07-15 20:52:43.911222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.629 [2024-07-15 20:52:43.911233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.629 [2024-07-15 20:52:43.911248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.629 qpair failed and we were unable to recover it. 00:27:09.629 [2024-07-15 20:52:43.921173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.629 [2024-07-15 20:52:43.921239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.629 [2024-07-15 20:52:43.921254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.629 [2024-07-15 20:52:43.921260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.629 [2024-07-15 20:52:43.921266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.629 [2024-07-15 20:52:43.921281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.629 qpair failed and we were unable to recover it. 00:27:09.629 [2024-07-15 20:52:43.931213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.629 [2024-07-15 20:52:43.931281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.629 [2024-07-15 20:52:43.931300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.629 [2024-07-15 20:52:43.931306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.629 [2024-07-15 20:52:43.931312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.629 [2024-07-15 20:52:43.931326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.629 qpair failed and we were unable to recover it. 
00:27:09.629 [2024-07-15 20:52:43.941248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.629 [2024-07-15 20:52:43.941315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.629 [2024-07-15 20:52:43.941330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.629 [2024-07-15 20:52:43.941337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.629 [2024-07-15 20:52:43.941342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.629 [2024-07-15 20:52:43.941357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.629 qpair failed and we were unable to recover it. 00:27:09.629 [2024-07-15 20:52:43.951242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.629 [2024-07-15 20:52:43.951304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.629 [2024-07-15 20:52:43.951320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.629 [2024-07-15 20:52:43.951327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.629 [2024-07-15 20:52:43.951332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.629 [2024-07-15 20:52:43.951346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.629 qpair failed and we were unable to recover it. 00:27:09.629 [2024-07-15 20:52:43.961279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.629 [2024-07-15 20:52:43.961366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.629 [2024-07-15 20:52:43.961381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.629 [2024-07-15 20:52:43.961387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.629 [2024-07-15 20:52:43.961393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:09.629 [2024-07-15 20:52:43.961408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:09.629 qpair failed and we were unable to recover it. 
00:27:09.629 [2024-07-15 20:52:43.971320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.629 [2024-07-15 20:52:43.971384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.629 [2024-07-15 20:52:43.971400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.629 [2024-07-15 20:52:43.971406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.629 [2024-07-15 20:52:43.971412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.629 [2024-07-15 20:52:43.971430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.629 qpair failed and we were unable to recover it.
00:27:09.629 [2024-07-15 20:52:43.981351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.629 [2024-07-15 20:52:43.981419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.629 [2024-07-15 20:52:43.981434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.629 [2024-07-15 20:52:43.981441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.629 [2024-07-15 20:52:43.981446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.629 [2024-07-15 20:52:43.981461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.629 qpair failed and we were unable to recover it.
00:27:09.629 [2024-07-15 20:52:43.991393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.629 [2024-07-15 20:52:43.991461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.629 [2024-07-15 20:52:43.991475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.629 [2024-07-15 20:52:43.991482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.629 [2024-07-15 20:52:43.991488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.629 [2024-07-15 20:52:43.991502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.629 qpair failed and we were unable to recover it.
00:27:09.629 [2024-07-15 20:52:44.001412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.629 [2024-07-15 20:52:44.001483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.629 [2024-07-15 20:52:44.001498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.629 [2024-07-15 20:52:44.001504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.629 [2024-07-15 20:52:44.001510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.629 [2024-07-15 20:52:44.001524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.629 qpair failed and we were unable to recover it.
00:27:09.629 [2024-07-15 20:52:44.011469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.629 [2024-07-15 20:52:44.011533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.629 [2024-07-15 20:52:44.011547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.629 [2024-07-15 20:52:44.011554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.629 [2024-07-15 20:52:44.011560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.629 [2024-07-15 20:52:44.011574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.629 qpair failed and we were unable to recover it.
00:27:09.629 [2024-07-15 20:52:44.021473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.629 [2024-07-15 20:52:44.021533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.629 [2024-07-15 20:52:44.021552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.629 [2024-07-15 20:52:44.021558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.629 [2024-07-15 20:52:44.021564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.629 [2024-07-15 20:52:44.021578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.629 qpair failed and we were unable to recover it.
00:27:09.629 [2024-07-15 20:52:44.031489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.629 [2024-07-15 20:52:44.031559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.629 [2024-07-15 20:52:44.031573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.629 [2024-07-15 20:52:44.031580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.629 [2024-07-15 20:52:44.031586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.629 [2024-07-15 20:52:44.031600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.629 qpair failed and we were unable to recover it.
00:27:09.629 [2024-07-15 20:52:44.041533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.630 [2024-07-15 20:52:44.041609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.630 [2024-07-15 20:52:44.041623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.630 [2024-07-15 20:52:44.041629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.630 [2024-07-15 20:52:44.041635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.630 [2024-07-15 20:52:44.041650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.630 qpair failed and we were unable to recover it.
00:27:09.630 [2024-07-15 20:52:44.051547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.630 [2024-07-15 20:52:44.051616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.630 [2024-07-15 20:52:44.051630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.630 [2024-07-15 20:52:44.051637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.630 [2024-07-15 20:52:44.051643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.630 [2024-07-15 20:52:44.051657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.630 qpair failed and we were unable to recover it.
00:27:09.630 [2024-07-15 20:52:44.061612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.630 [2024-07-15 20:52:44.061678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.630 [2024-07-15 20:52:44.061693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.630 [2024-07-15 20:52:44.061699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.630 [2024-07-15 20:52:44.061708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.630 [2024-07-15 20:52:44.061722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.630 qpair failed and we were unable to recover it.
00:27:09.630 [2024-07-15 20:52:44.071542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.630 [2024-07-15 20:52:44.071611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.630 [2024-07-15 20:52:44.071625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.630 [2024-07-15 20:52:44.071632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.630 [2024-07-15 20:52:44.071637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.630 [2024-07-15 20:52:44.071652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.630 qpair failed and we were unable to recover it.
00:27:09.630 [2024-07-15 20:52:44.081640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.630 [2024-07-15 20:52:44.081706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.630 [2024-07-15 20:52:44.081720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.630 [2024-07-15 20:52:44.081727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.630 [2024-07-15 20:52:44.081733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.630 [2024-07-15 20:52:44.081747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.630 qpair failed and we were unable to recover it.
00:27:09.630 [2024-07-15 20:52:44.091669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.630 [2024-07-15 20:52:44.091731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.630 [2024-07-15 20:52:44.091745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.630 [2024-07-15 20:52:44.091752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.630 [2024-07-15 20:52:44.091758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.630 [2024-07-15 20:52:44.091772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.630 qpair failed and we were unable to recover it.
00:27:09.630 [2024-07-15 20:52:44.101703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.630 [2024-07-15 20:52:44.101767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.630 [2024-07-15 20:52:44.101781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.630 [2024-07-15 20:52:44.101787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.630 [2024-07-15 20:52:44.101793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.630 [2024-07-15 20:52:44.101807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.630 qpair failed and we were unable to recover it.
00:27:09.890 [2024-07-15 20:52:44.111730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.890 [2024-07-15 20:52:44.111804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.890 [2024-07-15 20:52:44.111820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.890 [2024-07-15 20:52:44.111827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.890 [2024-07-15 20:52:44.111833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.891 [2024-07-15 20:52:44.111848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.891 qpair failed and we were unable to recover it.
00:27:09.891 [2024-07-15 20:52:44.121752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.891 [2024-07-15 20:52:44.121824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.891 [2024-07-15 20:52:44.121841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.891 [2024-07-15 20:52:44.121849] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.891 [2024-07-15 20:52:44.121856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.891 [2024-07-15 20:52:44.121872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.891 qpair failed and we were unable to recover it.
00:27:09.891 [2024-07-15 20:52:44.131786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.891 [2024-07-15 20:52:44.131856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.891 [2024-07-15 20:52:44.131871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.891 [2024-07-15 20:52:44.131879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.891 [2024-07-15 20:52:44.131885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.891 [2024-07-15 20:52:44.131900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.891 qpair failed and we were unable to recover it.
00:27:09.891 [2024-07-15 20:52:44.141822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.891 [2024-07-15 20:52:44.141892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.891 [2024-07-15 20:52:44.141909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.891 [2024-07-15 20:52:44.141916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.891 [2024-07-15 20:52:44.141922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.891 [2024-07-15 20:52:44.141936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.891 qpair failed and we were unable to recover it.
00:27:09.891 [2024-07-15 20:52:44.151835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.891 [2024-07-15 20:52:44.151908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.891 [2024-07-15 20:52:44.151923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.891 [2024-07-15 20:52:44.151934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.891 [2024-07-15 20:52:44.151940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.891 [2024-07-15 20:52:44.151956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.891 qpair failed and we were unable to recover it.
00:27:09.891 [2024-07-15 20:52:44.161881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.891 [2024-07-15 20:52:44.161950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.891 [2024-07-15 20:52:44.161965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.891 [2024-07-15 20:52:44.161972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.891 [2024-07-15 20:52:44.161979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.891 [2024-07-15 20:52:44.161994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.891 qpair failed and we were unable to recover it.
00:27:09.891 [2024-07-15 20:52:44.171915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.891 [2024-07-15 20:52:44.171986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.891 [2024-07-15 20:52:44.172003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.891 [2024-07-15 20:52:44.172011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.891 [2024-07-15 20:52:44.172018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.891 [2024-07-15 20:52:44.172033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.891 qpair failed and we were unable to recover it.
00:27:09.891 [2024-07-15 20:52:44.181930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.891 [2024-07-15 20:52:44.182001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.891 [2024-07-15 20:52:44.182016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.891 [2024-07-15 20:52:44.182023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.891 [2024-07-15 20:52:44.182029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.891 [2024-07-15 20:52:44.182044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.891 qpair failed and we were unable to recover it.
00:27:09.891 [2024-07-15 20:52:44.191950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.891 [2024-07-15 20:52:44.192017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.891 [2024-07-15 20:52:44.192032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.891 [2024-07-15 20:52:44.192039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.891 [2024-07-15 20:52:44.192046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.891 [2024-07-15 20:52:44.192060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.891 qpair failed and we were unable to recover it.
00:27:09.891 [2024-07-15 20:52:44.201980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.891 [2024-07-15 20:52:44.202092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.891 [2024-07-15 20:52:44.202110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.891 [2024-07-15 20:52:44.202117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.891 [2024-07-15 20:52:44.202124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.891 [2024-07-15 20:52:44.202139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.891 qpair failed and we were unable to recover it.
00:27:09.891 [2024-07-15 20:52:44.211925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.891 [2024-07-15 20:52:44.211988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.891 [2024-07-15 20:52:44.212003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.891 [2024-07-15 20:52:44.212010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.891 [2024-07-15 20:52:44.212016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.891 [2024-07-15 20:52:44.212030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.891 qpair failed and we were unable to recover it.
00:27:09.891 [2024-07-15 20:52:44.222016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.891 [2024-07-15 20:52:44.222086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.891 [2024-07-15 20:52:44.222102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.891 [2024-07-15 20:52:44.222109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.891 [2024-07-15 20:52:44.222117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.891 [2024-07-15 20:52:44.222131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.891 qpair failed and we were unable to recover it.
00:27:09.891 [2024-07-15 20:52:44.232061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.892 [2024-07-15 20:52:44.232126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.892 [2024-07-15 20:52:44.232141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.892 [2024-07-15 20:52:44.232149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.892 [2024-07-15 20:52:44.232156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.892 [2024-07-15 20:52:44.232172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.892 qpair failed and we were unable to recover it.
00:27:09.892 [2024-07-15 20:52:44.242025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.892 [2024-07-15 20:52:44.242091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.892 [2024-07-15 20:52:44.242107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.892 [2024-07-15 20:52:44.242117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.892 [2024-07-15 20:52:44.242123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.892 [2024-07-15 20:52:44.242138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.892 qpair failed and we were unable to recover it.
00:27:09.892 [2024-07-15 20:52:44.252123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.892 [2024-07-15 20:52:44.252192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.892 [2024-07-15 20:52:44.252207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.892 [2024-07-15 20:52:44.252214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.892 [2024-07-15 20:52:44.252221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.892 [2024-07-15 20:52:44.252239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.892 qpair failed and we were unable to recover it.
00:27:09.892 [2024-07-15 20:52:44.262066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.892 [2024-07-15 20:52:44.262129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.892 [2024-07-15 20:52:44.262145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.892 [2024-07-15 20:52:44.262152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.892 [2024-07-15 20:52:44.262158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.892 [2024-07-15 20:52:44.262174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.892 qpair failed and we were unable to recover it.
00:27:09.892 [2024-07-15 20:52:44.272169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.892 [2024-07-15 20:52:44.272250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.892 [2024-07-15 20:52:44.272268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.892 [2024-07-15 20:52:44.272275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.892 [2024-07-15 20:52:44.272283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.892 [2024-07-15 20:52:44.272299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.892 qpair failed and we were unable to recover it.
00:27:09.892 [2024-07-15 20:52:44.282189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.892 [2024-07-15 20:52:44.282306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.892 [2024-07-15 20:52:44.282325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.892 [2024-07-15 20:52:44.282332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.892 [2024-07-15 20:52:44.282339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.892 [2024-07-15 20:52:44.282355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.892 qpair failed and we were unable to recover it.
00:27:09.892 [2024-07-15 20:52:44.292239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.892 [2024-07-15 20:52:44.292308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.892 [2024-07-15 20:52:44.292325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.892 [2024-07-15 20:52:44.292332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.892 [2024-07-15 20:52:44.292340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.892 [2024-07-15 20:52:44.292355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.892 qpair failed and we were unable to recover it.
00:27:09.892 [2024-07-15 20:52:44.302219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.892 [2024-07-15 20:52:44.302319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.892 [2024-07-15 20:52:44.302335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.892 [2024-07-15 20:52:44.302342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.892 [2024-07-15 20:52:44.302349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.892 [2024-07-15 20:52:44.302365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.892 qpair failed and we were unable to recover it.
00:27:09.892 [2024-07-15 20:52:44.312220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.892 [2024-07-15 20:52:44.312296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.892 [2024-07-15 20:52:44.312312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.892 [2024-07-15 20:52:44.312319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.892 [2024-07-15 20:52:44.312325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.892 [2024-07-15 20:52:44.312339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.892 qpair failed and we were unable to recover it.
00:27:09.892 [2024-07-15 20:52:44.322327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.892 [2024-07-15 20:52:44.322404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.892 [2024-07-15 20:52:44.322419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.892 [2024-07-15 20:52:44.322427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.892 [2024-07-15 20:52:44.322433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.892 [2024-07-15 20:52:44.322448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.892 qpair failed and we were unable to recover it.
00:27:09.892 [2024-07-15 20:52:44.332358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.892 [2024-07-15 20:52:44.332438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.892 [2024-07-15 20:52:44.332456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.892 [2024-07-15 20:52:44.332463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.892 [2024-07-15 20:52:44.332469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.892 [2024-07-15 20:52:44.332485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.892 qpair failed and we were unable to recover it.
00:27:09.892 [2024-07-15 20:52:44.342369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.892 [2024-07-15 20:52:44.342449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.892 [2024-07-15 20:52:44.342465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.892 [2024-07-15 20:52:44.342473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.892 [2024-07-15 20:52:44.342479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.893 [2024-07-15 20:52:44.342494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.893 qpair failed and we were unable to recover it.
00:27:09.893 [2024-07-15 20:52:44.352406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.893 [2024-07-15 20:52:44.352473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.893 [2024-07-15 20:52:44.352487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.893 [2024-07-15 20:52:44.352494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.893 [2024-07-15 20:52:44.352501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.893 [2024-07-15 20:52:44.352516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.893 qpair failed and we were unable to recover it.
00:27:09.893 [2024-07-15 20:52:44.362440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.893 [2024-07-15 20:52:44.362508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.893 [2024-07-15 20:52:44.362523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.893 [2024-07-15 20:52:44.362530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.893 [2024-07-15 20:52:44.362536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:09.893 [2024-07-15 20:52:44.362551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.893 qpair failed and we were unable to recover it.
00:27:10.154 [2024-07-15 20:52:44.372504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.154 [2024-07-15 20:52:44.372569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.154 [2024-07-15 20:52:44.372589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.154 [2024-07-15 20:52:44.372597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.154 [2024-07-15 20:52:44.372604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.154 [2024-07-15 20:52:44.372624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.154 qpair failed and we were unable to recover it.
00:27:10.154 [2024-07-15 20:52:44.382494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.154 [2024-07-15 20:52:44.382573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.154 [2024-07-15 20:52:44.382589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.154 [2024-07-15 20:52:44.382596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.154 [2024-07-15 20:52:44.382603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.154 [2024-07-15 20:52:44.382618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.154 qpair failed and we were unable to recover it.
00:27:10.154 [2024-07-15 20:52:44.392466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.154 [2024-07-15 20:52:44.392549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.154 [2024-07-15 20:52:44.392565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.154 [2024-07-15 20:52:44.392573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.154 [2024-07-15 20:52:44.392579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.154 [2024-07-15 20:52:44.392594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.154 qpair failed and we were unable to recover it.
00:27:10.154 [2024-07-15 20:52:44.402529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.154 [2024-07-15 20:52:44.402596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.154 [2024-07-15 20:52:44.402611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.154 [2024-07-15 20:52:44.402618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.154 [2024-07-15 20:52:44.402624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.154 [2024-07-15 20:52:44.402639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.154 qpair failed and we were unable to recover it.
00:27:10.154 [2024-07-15 20:52:44.412579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.154 [2024-07-15 20:52:44.412648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.154 [2024-07-15 20:52:44.412663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.154 [2024-07-15 20:52:44.412671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.154 [2024-07-15 20:52:44.412677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.154 [2024-07-15 20:52:44.412692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.154 qpair failed and we were unable to recover it.
00:27:10.154 [2024-07-15 20:52:44.422605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.154 [2024-07-15 20:52:44.422685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.154 [2024-07-15 20:52:44.422703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.155 [2024-07-15 20:52:44.422710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.155 [2024-07-15 20:52:44.422716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.155 [2024-07-15 20:52:44.422731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.155 qpair failed and we were unable to recover it.
00:27:10.155 [2024-07-15 20:52:44.432639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.155 [2024-07-15 20:52:44.432707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.155 [2024-07-15 20:52:44.432723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.155 [2024-07-15 20:52:44.432730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.155 [2024-07-15 20:52:44.432736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.155 [2024-07-15 20:52:44.432751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.155 qpair failed and we were unable to recover it.
00:27:10.155 [2024-07-15 20:52:44.442666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.155 [2024-07-15 20:52:44.442735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.155 [2024-07-15 20:52:44.442750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.155 [2024-07-15 20:52:44.442757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.155 [2024-07-15 20:52:44.442763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.155 [2024-07-15 20:52:44.442778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.155 qpair failed and we were unable to recover it.
00:27:10.155 [2024-07-15 20:52:44.452696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.155 [2024-07-15 20:52:44.452809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.155 [2024-07-15 20:52:44.452825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.155 [2024-07-15 20:52:44.452833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.155 [2024-07-15 20:52:44.452839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.155 [2024-07-15 20:52:44.452856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.155 qpair failed and we were unable to recover it.
00:27:10.155 [2024-07-15 20:52:44.462735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.155 [2024-07-15 20:52:44.462803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.155 [2024-07-15 20:52:44.462818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.155 [2024-07-15 20:52:44.462824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.155 [2024-07-15 20:52:44.462833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.155 [2024-07-15 20:52:44.462848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.155 qpair failed and we were unable to recover it.
00:27:10.155 [2024-07-15 20:52:44.472739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.155 [2024-07-15 20:52:44.472806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.155 [2024-07-15 20:52:44.472820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.155 [2024-07-15 20:52:44.472828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.155 [2024-07-15 20:52:44.472835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.155 [2024-07-15 20:52:44.472849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.155 qpair failed and we were unable to recover it.
00:27:10.155 [2024-07-15 20:52:44.482763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.155 [2024-07-15 20:52:44.482842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.155 [2024-07-15 20:52:44.482857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.155 [2024-07-15 20:52:44.482864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.155 [2024-07-15 20:52:44.482871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.155 [2024-07-15 20:52:44.482885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.155 qpair failed and we were unable to recover it.
00:27:10.155 [2024-07-15 20:52:44.492818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.155 [2024-07-15 20:52:44.492890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.155 [2024-07-15 20:52:44.492905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.155 [2024-07-15 20:52:44.492912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.155 [2024-07-15 20:52:44.492919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.155 [2024-07-15 20:52:44.492934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.155 qpair failed and we were unable to recover it.
00:27:10.155 [2024-07-15 20:52:44.502781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.155 [2024-07-15 20:52:44.502856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.155 [2024-07-15 20:52:44.502871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.155 [2024-07-15 20:52:44.502878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.155 [2024-07-15 20:52:44.502884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.155 [2024-07-15 20:52:44.502898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.155 qpair failed and we were unable to recover it.
00:27:10.155 [2024-07-15 20:52:44.512866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.155 [2024-07-15 20:52:44.512936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.155 [2024-07-15 20:52:44.512953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.155 [2024-07-15 20:52:44.512960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.155 [2024-07-15 20:52:44.512967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.155 [2024-07-15 20:52:44.512982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.155 qpair failed and we were unable to recover it.
00:27:10.155 [2024-07-15 20:52:44.522877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.155 [2024-07-15 20:52:44.522945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.155 [2024-07-15 20:52:44.522960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.155 [2024-07-15 20:52:44.522967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.155 [2024-07-15 20:52:44.522973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.155 [2024-07-15 20:52:44.522988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.155 qpair failed and we were unable to recover it.
00:27:10.155 [2024-07-15 20:52:44.532943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.155 [2024-07-15 20:52:44.533020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.155 [2024-07-15 20:52:44.533036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.155 [2024-07-15 20:52:44.533044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.155 [2024-07-15 20:52:44.533050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.155 [2024-07-15 20:52:44.533064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.155 qpair failed and we were unable to recover it.
00:27:10.155 [2024-07-15 20:52:44.542955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.155 [2024-07-15 20:52:44.543020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.155 [2024-07-15 20:52:44.543037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.156 [2024-07-15 20:52:44.543044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.156 [2024-07-15 20:52:44.543051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.156 [2024-07-15 20:52:44.543067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.156 qpair failed and we were unable to recover it. 00:27:10.156 [2024-07-15 20:52:44.552976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.156 [2024-07-15 20:52:44.553055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.156 [2024-07-15 20:52:44.553070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.156 [2024-07-15 20:52:44.553078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.156 [2024-07-15 20:52:44.553087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.156 [2024-07-15 20:52:44.553102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.156 qpair failed and we were unable to recover it. 00:27:10.156 [2024-07-15 20:52:44.562945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.156 [2024-07-15 20:52:44.563011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.156 [2024-07-15 20:52:44.563027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.156 [2024-07-15 20:52:44.563034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.156 [2024-07-15 20:52:44.563040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.156 [2024-07-15 20:52:44.563055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.156 qpair failed and we were unable to recover it. 
00:27:10.156 [2024-07-15 20:52:44.573055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.156 [2024-07-15 20:52:44.573121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.156 [2024-07-15 20:52:44.573137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.156 [2024-07-15 20:52:44.573144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.156 [2024-07-15 20:52:44.573151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.156 [2024-07-15 20:52:44.573165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.156 qpair failed and we were unable to recover it. 00:27:10.156 [2024-07-15 20:52:44.583084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.156 [2024-07-15 20:52:44.583158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.156 [2024-07-15 20:52:44.583174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.156 [2024-07-15 20:52:44.583181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.156 [2024-07-15 20:52:44.583188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.156 [2024-07-15 20:52:44.583202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.156 qpair failed and we were unable to recover it. 00:27:10.156 [2024-07-15 20:52:44.593090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.156 [2024-07-15 20:52:44.593157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.156 [2024-07-15 20:52:44.593173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.156 [2024-07-15 20:52:44.593180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.156 [2024-07-15 20:52:44.593186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.156 [2024-07-15 20:52:44.593200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.156 qpair failed and we were unable to recover it. 
00:27:10.156 [2024-07-15 20:52:44.603137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.156 [2024-07-15 20:52:44.603204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.156 [2024-07-15 20:52:44.603220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.156 [2024-07-15 20:52:44.603231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.156 [2024-07-15 20:52:44.603237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.156 [2024-07-15 20:52:44.603252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.156 qpair failed and we were unable to recover it. 00:27:10.156 [2024-07-15 20:52:44.613154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.156 [2024-07-15 20:52:44.613222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.156 [2024-07-15 20:52:44.613240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.156 [2024-07-15 20:52:44.613247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.156 [2024-07-15 20:52:44.613254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.156 [2024-07-15 20:52:44.613268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.156 qpair failed and we were unable to recover it. 00:27:10.156 [2024-07-15 20:52:44.623179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.156 [2024-07-15 20:52:44.623244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.156 [2024-07-15 20:52:44.623261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.156 [2024-07-15 20:52:44.623269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.156 [2024-07-15 20:52:44.623276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.156 [2024-07-15 20:52:44.623292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.156 qpair failed and we were unable to recover it. 
00:27:10.156 [2024-07-15 20:52:44.633201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.156 [2024-07-15 20:52:44.633271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.156 [2024-07-15 20:52:44.633287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.156 [2024-07-15 20:52:44.633295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.156 [2024-07-15 20:52:44.633301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.156 [2024-07-15 20:52:44.633316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.156 qpair failed and we were unable to recover it. 00:27:10.418 [2024-07-15 20:52:44.643241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.418 [2024-07-15 20:52:44.643324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.418 [2024-07-15 20:52:44.643341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.418 [2024-07-15 20:52:44.643355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.418 [2024-07-15 20:52:44.643362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.418 [2024-07-15 20:52:44.643378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.418 qpair failed and we were unable to recover it. 00:27:10.418 [2024-07-15 20:52:44.653285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.418 [2024-07-15 20:52:44.653364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.418 [2024-07-15 20:52:44.653379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.418 [2024-07-15 20:52:44.653386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.418 [2024-07-15 20:52:44.653393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.418 [2024-07-15 20:52:44.653407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.418 qpair failed and we were unable to recover it. 
00:27:10.418 [2024-07-15 20:52:44.663305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.418 [2024-07-15 20:52:44.663375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.418 [2024-07-15 20:52:44.663391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.418 [2024-07-15 20:52:44.663398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.418 [2024-07-15 20:52:44.663404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.418 [2024-07-15 20:52:44.663419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.418 qpair failed and we were unable to recover it. 00:27:10.418 [2024-07-15 20:52:44.673324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.418 [2024-07-15 20:52:44.673406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.418 [2024-07-15 20:52:44.673422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.418 [2024-07-15 20:52:44.673429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.418 [2024-07-15 20:52:44.673435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.418 [2024-07-15 20:52:44.673450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.418 qpair failed and we were unable to recover it. 00:27:10.418 [2024-07-15 20:52:44.683362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.418 [2024-07-15 20:52:44.683437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.418 [2024-07-15 20:52:44.683453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.418 [2024-07-15 20:52:44.683460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.418 [2024-07-15 20:52:44.683466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.418 [2024-07-15 20:52:44.683481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.418 qpair failed and we were unable to recover it. 
00:27:10.418 [2024-07-15 20:52:44.693408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.418 [2024-07-15 20:52:44.693476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.418 [2024-07-15 20:52:44.693491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.418 [2024-07-15 20:52:44.693498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.418 [2024-07-15 20:52:44.693505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.418 [2024-07-15 20:52:44.693519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.418 qpair failed and we were unable to recover it. 00:27:10.418 [2024-07-15 20:52:44.703415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.418 [2024-07-15 20:52:44.703523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.418 [2024-07-15 20:52:44.703540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.419 [2024-07-15 20:52:44.703547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.419 [2024-07-15 20:52:44.703554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.419 [2024-07-15 20:52:44.703570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.419 qpair failed and we were unable to recover it. 00:27:10.419 [2024-07-15 20:52:44.713439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.419 [2024-07-15 20:52:44.713507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.419 [2024-07-15 20:52:44.713523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.419 [2024-07-15 20:52:44.713530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.419 [2024-07-15 20:52:44.713536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.419 [2024-07-15 20:52:44.713551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.419 qpair failed and we were unable to recover it. 
00:27:10.419 [2024-07-15 20:52:44.723474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.419 [2024-07-15 20:52:44.723547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.419 [2024-07-15 20:52:44.723562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.419 [2024-07-15 20:52:44.723569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.419 [2024-07-15 20:52:44.723575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.419 [2024-07-15 20:52:44.723590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.419 qpair failed and we were unable to recover it. 00:27:10.419 [2024-07-15 20:52:44.733491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.419 [2024-07-15 20:52:44.733557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.419 [2024-07-15 20:52:44.733576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.419 [2024-07-15 20:52:44.733583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.419 [2024-07-15 20:52:44.733589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.419 [2024-07-15 20:52:44.733605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.419 qpair failed and we were unable to recover it. 00:27:10.419 [2024-07-15 20:52:44.743537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.419 [2024-07-15 20:52:44.743614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.419 [2024-07-15 20:52:44.743629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.419 [2024-07-15 20:52:44.743636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.419 [2024-07-15 20:52:44.743642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.419 [2024-07-15 20:52:44.743657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.419 qpair failed and we were unable to recover it. 
00:27:10.419 [2024-07-15 20:52:44.753578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.419 [2024-07-15 20:52:44.753656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.419 [2024-07-15 20:52:44.753672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.419 [2024-07-15 20:52:44.753679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.419 [2024-07-15 20:52:44.753685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.419 [2024-07-15 20:52:44.753700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.419 qpair failed and we were unable to recover it. 00:27:10.419 [2024-07-15 20:52:44.763597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.419 [2024-07-15 20:52:44.763676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.419 [2024-07-15 20:52:44.763692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.419 [2024-07-15 20:52:44.763699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.419 [2024-07-15 20:52:44.763705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.419 [2024-07-15 20:52:44.763720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.419 qpair failed and we were unable to recover it. 00:27:10.419 [2024-07-15 20:52:44.773609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.419 [2024-07-15 20:52:44.773680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.419 [2024-07-15 20:52:44.773697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.419 [2024-07-15 20:52:44.773705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.419 [2024-07-15 20:52:44.773711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.419 [2024-07-15 20:52:44.773731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.419 qpair failed and we were unable to recover it. 
00:27:10.419 [2024-07-15 20:52:44.783637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.419 [2024-07-15 20:52:44.783715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.419 [2024-07-15 20:52:44.783732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.419 [2024-07-15 20:52:44.783741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.419 [2024-07-15 20:52:44.783749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.419 [2024-07-15 20:52:44.783764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.419 qpair failed and we were unable to recover it. 00:27:10.419 [2024-07-15 20:52:44.793717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.419 [2024-07-15 20:52:44.793790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.419 [2024-07-15 20:52:44.793807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.419 [2024-07-15 20:52:44.793814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.419 [2024-07-15 20:52:44.793821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.419 [2024-07-15 20:52:44.793835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.419 qpair failed and we were unable to recover it. 00:27:10.419 [2024-07-15 20:52:44.803707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.419 [2024-07-15 20:52:44.803808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.419 [2024-07-15 20:52:44.803823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.419 [2024-07-15 20:52:44.803831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.419 [2024-07-15 20:52:44.803837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.419 [2024-07-15 20:52:44.803853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.419 qpair failed and we were unable to recover it. 
00:27:10.419 [2024-07-15 20:52:44.813732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.419 [2024-07-15 20:52:44.813818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.419 [2024-07-15 20:52:44.813834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.419 [2024-07-15 20:52:44.813841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.419 [2024-07-15 20:52:44.813847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.419 [2024-07-15 20:52:44.813862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.419 qpair failed and we were unable to recover it. 00:27:10.419 [2024-07-15 20:52:44.823746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.419 [2024-07-15 20:52:44.823818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.419 [2024-07-15 20:52:44.823836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.419 [2024-07-15 20:52:44.823844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.419 [2024-07-15 20:52:44.823850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.419 [2024-07-15 20:52:44.823865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.419 qpair failed and we were unable to recover it. 00:27:10.419 [2024-07-15 20:52:44.833729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.419 [2024-07-15 20:52:44.833797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.419 [2024-07-15 20:52:44.833812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.419 [2024-07-15 20:52:44.833819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.419 [2024-07-15 20:52:44.833826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.419 [2024-07-15 20:52:44.833840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.419 qpair failed and we were unable to recover it. 
00:27:10.419 [2024-07-15 20:52:44.843740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.419 [2024-07-15 20:52:44.843809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.419 [2024-07-15 20:52:44.843824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.419 [2024-07-15 20:52:44.843832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.419 [2024-07-15 20:52:44.843838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.419 [2024-07-15 20:52:44.843853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.420 qpair failed and we were unable to recover it. 00:27:10.420 [2024-07-15 20:52:44.853846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.420 [2024-07-15 20:52:44.853916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.420 [2024-07-15 20:52:44.853933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.420 [2024-07-15 20:52:44.853940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.420 [2024-07-15 20:52:44.853946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.420 [2024-07-15 20:52:44.853962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.420 qpair failed and we were unable to recover it. 00:27:10.420 [2024-07-15 20:52:44.863870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.420 [2024-07-15 20:52:44.863951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.420 [2024-07-15 20:52:44.863966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.420 [2024-07-15 20:52:44.863973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.420 [2024-07-15 20:52:44.863982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.420 [2024-07-15 20:52:44.863997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.420 qpair failed and we were unable to recover it. 
00:27:10.420 [2024-07-15 20:52:44.873916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.420 [2024-07-15 20:52:44.873985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.420 [2024-07-15 20:52:44.874000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.420 [2024-07-15 20:52:44.874007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.420 [2024-07-15 20:52:44.874014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.420 [2024-07-15 20:52:44.874028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.420 qpair failed and we were unable to recover it. 00:27:10.420 [2024-07-15 20:52:44.883929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.420 [2024-07-15 20:52:44.883997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.420 [2024-07-15 20:52:44.884012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.420 [2024-07-15 20:52:44.884020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.420 [2024-07-15 20:52:44.884026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.420 [2024-07-15 20:52:44.884041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.420 qpair failed and we were unable to recover it. 00:27:10.420 [2024-07-15 20:52:44.893919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.420 [2024-07-15 20:52:44.894005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.420 [2024-07-15 20:52:44.894019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.420 [2024-07-15 20:52:44.894027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.420 [2024-07-15 20:52:44.894033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.420 [2024-07-15 20:52:44.894048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.420 qpair failed and we were unable to recover it. 
00:27:10.681 [2024-07-15 20:52:44.903979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.681 [2024-07-15 20:52:44.904049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.681 [2024-07-15 20:52:44.904067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.681 [2024-07-15 20:52:44.904075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.681 [2024-07-15 20:52:44.904082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.681 [2024-07-15 20:52:44.904097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.681 qpair failed and we were unable to recover it. 00:27:10.681 [2024-07-15 20:52:44.914012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.681 [2024-07-15 20:52:44.914084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.681 [2024-07-15 20:52:44.914101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.681 [2024-07-15 20:52:44.914112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.681 [2024-07-15 20:52:44.914119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.681 [2024-07-15 20:52:44.914135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.681 qpair failed and we were unable to recover it. 00:27:10.681 [2024-07-15 20:52:44.924010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.681 [2024-07-15 20:52:44.924076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.681 [2024-07-15 20:52:44.924091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.681 [2024-07-15 20:52:44.924098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.681 [2024-07-15 20:52:44.924105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.681 [2024-07-15 20:52:44.924119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.681 qpair failed and we were unable to recover it. 
00:27:10.681 [2024-07-15 20:52:44.934040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.681 [2024-07-15 20:52:44.934110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.681 [2024-07-15 20:52:44.934126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.681 [2024-07-15 20:52:44.934134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.681 [2024-07-15 20:52:44.934141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.681 [2024-07-15 20:52:44.934156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.681 qpair failed and we were unable to recover it. 00:27:10.681 [2024-07-15 20:52:44.944016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.681 [2024-07-15 20:52:44.944087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.681 [2024-07-15 20:52:44.944103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.681 [2024-07-15 20:52:44.944111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.681 [2024-07-15 20:52:44.944117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.681 [2024-07-15 20:52:44.944132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.681 qpair failed and we were unable to recover it. 00:27:10.681 [2024-07-15 20:52:44.954180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.681 [2024-07-15 20:52:44.954255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.681 [2024-07-15 20:52:44.954271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.681 [2024-07-15 20:52:44.954279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.681 [2024-07-15 20:52:44.954288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.681 [2024-07-15 20:52:44.954304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.681 qpair failed and we were unable to recover it. 
00:27:10.681 [2024-07-15 20:52:44.964124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.681 [2024-07-15 20:52:44.964194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.681 [2024-07-15 20:52:44.964208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.681 [2024-07-15 20:52:44.964217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.681 [2024-07-15 20:52:44.964228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.681 [2024-07-15 20:52:44.964243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.681 qpair failed and we were unable to recover it. 00:27:10.681 [2024-07-15 20:52:44.974179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.681 [2024-07-15 20:52:44.974308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.681 [2024-07-15 20:52:44.974326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.681 [2024-07-15 20:52:44.974333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.681 [2024-07-15 20:52:44.974339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.681 [2024-07-15 20:52:44.974356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.681 qpair failed and we were unable to recover it. 00:27:10.681 [2024-07-15 20:52:44.984154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.681 [2024-07-15 20:52:44.984267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.681 [2024-07-15 20:52:44.984282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.681 [2024-07-15 20:52:44.984290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.681 [2024-07-15 20:52:44.984296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.681 [2024-07-15 20:52:44.984311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.681 qpair failed and we were unable to recover it. 
00:27:10.681 [2024-07-15 20:52:44.994175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.681 [2024-07-15 20:52:44.994249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.682 [2024-07-15 20:52:44.994264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.682 [2024-07-15 20:52:44.994272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.682 [2024-07-15 20:52:44.994278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.682 [2024-07-15 20:52:44.994293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.682 qpair failed and we were unable to recover it. 00:27:10.682 [2024-07-15 20:52:45.004244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.682 [2024-07-15 20:52:45.004308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.682 [2024-07-15 20:52:45.004324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.682 [2024-07-15 20:52:45.004331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.682 [2024-07-15 20:52:45.004337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.682 [2024-07-15 20:52:45.004352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.682 qpair failed and we were unable to recover it. 00:27:10.682 [2024-07-15 20:52:45.014305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.682 [2024-07-15 20:52:45.014376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.682 [2024-07-15 20:52:45.014391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.682 [2024-07-15 20:52:45.014399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.682 [2024-07-15 20:52:45.014405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.682 [2024-07-15 20:52:45.014419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.682 qpair failed and we were unable to recover it. 
00:27:10.682 [2024-07-15 20:52:45.024276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.682 [2024-07-15 20:52:45.024339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.682 [2024-07-15 20:52:45.024356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.682 [2024-07-15 20:52:45.024363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.682 [2024-07-15 20:52:45.024370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.682 [2024-07-15 20:52:45.024385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.682 qpair failed and we were unable to recover it. 00:27:10.682 [2024-07-15 20:52:45.034332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.682 [2024-07-15 20:52:45.034400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.682 [2024-07-15 20:52:45.034415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.682 [2024-07-15 20:52:45.034422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.682 [2024-07-15 20:52:45.034429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.682 [2024-07-15 20:52:45.034443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.682 qpair failed and we were unable to recover it. 00:27:10.682 [2024-07-15 20:52:45.044369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.682 [2024-07-15 20:52:45.044441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.682 [2024-07-15 20:52:45.044456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.682 [2024-07-15 20:52:45.044467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.682 [2024-07-15 20:52:45.044473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.682 [2024-07-15 20:52:45.044488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.682 qpair failed and we were unable to recover it. 
00:27:10.682 [2024-07-15 20:52:45.054353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.682 [2024-07-15 20:52:45.054422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.682 [2024-07-15 20:52:45.054437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.682 [2024-07-15 20:52:45.054444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.682 [2024-07-15 20:52:45.054451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.682 [2024-07-15 20:52:45.054465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.682 qpair failed and we were unable to recover it. 00:27:10.682 [2024-07-15 20:52:45.064389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.682 [2024-07-15 20:52:45.064455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.682 [2024-07-15 20:52:45.064470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.682 [2024-07-15 20:52:45.064477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.682 [2024-07-15 20:52:45.064484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.682 [2024-07-15 20:52:45.064499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.682 qpair failed and we were unable to recover it. 00:27:10.682 [2024-07-15 20:52:45.074386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.682 [2024-07-15 20:52:45.074456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.682 [2024-07-15 20:52:45.074474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.682 [2024-07-15 20:52:45.074481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.682 [2024-07-15 20:52:45.074488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.682 [2024-07-15 20:52:45.074503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.682 qpair failed and we were unable to recover it. 
00:27:10.682 [2024-07-15 20:52:45.084421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.682 [2024-07-15 20:52:45.084493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.682 [2024-07-15 20:52:45.084508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.682 [2024-07-15 20:52:45.084516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.682 [2024-07-15 20:52:45.084522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.682 [2024-07-15 20:52:45.084537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.682 qpair failed and we were unable to recover it. 00:27:10.682 [2024-07-15 20:52:45.094445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.682 [2024-07-15 20:52:45.094512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.682 [2024-07-15 20:52:45.094528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.682 [2024-07-15 20:52:45.094535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.682 [2024-07-15 20:52:45.094542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.682 [2024-07-15 20:52:45.094556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.682 qpair failed and we were unable to recover it. 00:27:10.682 [2024-07-15 20:52:45.104541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.682 [2024-07-15 20:52:45.104610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.682 [2024-07-15 20:52:45.104625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.682 [2024-07-15 20:52:45.104632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.682 [2024-07-15 20:52:45.104638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:10.683 [2024-07-15 20:52:45.104653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:10.683 qpair failed and we were unable to recover it. 
00:27:10.683 [2024-07-15 20:52:45.114509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.683 [2024-07-15 20:52:45.114577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.683 [2024-07-15 20:52:45.114593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.683 [2024-07-15 20:52:45.114600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.683 [2024-07-15 20:52:45.114606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.683 [2024-07-15 20:52:45.114621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.683 qpair failed and we were unable to recover it.
00:27:10.683 [2024-07-15 20:52:45.124605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.683 [2024-07-15 20:52:45.124679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.683 [2024-07-15 20:52:45.124694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.683 [2024-07-15 20:52:45.124701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.683 [2024-07-15 20:52:45.124708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.683 [2024-07-15 20:52:45.124722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.683 qpair failed and we were unable to recover it.
00:27:10.683 [2024-07-15 20:52:45.134561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.683 [2024-07-15 20:52:45.134631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.683 [2024-07-15 20:52:45.134649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.683 [2024-07-15 20:52:45.134657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.683 [2024-07-15 20:52:45.134663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.683 [2024-07-15 20:52:45.134677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.683 qpair failed and we were unable to recover it.
00:27:10.683 [2024-07-15 20:52:45.144657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.683 [2024-07-15 20:52:45.144726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.683 [2024-07-15 20:52:45.144743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.683 [2024-07-15 20:52:45.144750] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.683 [2024-07-15 20:52:45.144758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.683 [2024-07-15 20:52:45.144773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.683 qpair failed and we were unable to recover it.
00:27:10.683 [2024-07-15 20:52:45.154675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.683 [2024-07-15 20:52:45.154744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.683 [2024-07-15 20:52:45.154759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.683 [2024-07-15 20:52:45.154766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.683 [2024-07-15 20:52:45.154772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.683 [2024-07-15 20:52:45.154787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.683 qpair failed and we were unable to recover it.
00:27:10.943 [2024-07-15 20:52:45.164710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.943 [2024-07-15 20:52:45.164782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.943 [2024-07-15 20:52:45.164799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.943 [2024-07-15 20:52:45.164808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.943 [2024-07-15 20:52:45.164814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.943 [2024-07-15 20:52:45.164831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.943 qpair failed and we were unable to recover it.
00:27:10.943 [2024-07-15 20:52:45.174747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.943 [2024-07-15 20:52:45.174813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.943 [2024-07-15 20:52:45.174830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.943 [2024-07-15 20:52:45.174838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.943 [2024-07-15 20:52:45.174844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.943 [2024-07-15 20:52:45.174863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.943 qpair failed and we were unable to recover it.
00:27:10.943 [2024-07-15 20:52:45.184818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.943 [2024-07-15 20:52:45.184884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.943 [2024-07-15 20:52:45.184899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.943 [2024-07-15 20:52:45.184907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.944 [2024-07-15 20:52:45.184913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.944 [2024-07-15 20:52:45.184928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.944 qpair failed and we were unable to recover it.
00:27:10.944 [2024-07-15 20:52:45.194795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.944 [2024-07-15 20:52:45.194863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.944 [2024-07-15 20:52:45.194877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.944 [2024-07-15 20:52:45.194885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.944 [2024-07-15 20:52:45.194892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.944 [2024-07-15 20:52:45.194907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.944 qpair failed and we were unable to recover it.
00:27:10.944 [2024-07-15 20:52:45.204825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.944 [2024-07-15 20:52:45.204895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.944 [2024-07-15 20:52:45.204911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.944 [2024-07-15 20:52:45.204919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.944 [2024-07-15 20:52:45.204925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.944 [2024-07-15 20:52:45.204941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.944 qpair failed and we were unable to recover it.
00:27:10.944 [2024-07-15 20:52:45.214877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.944 [2024-07-15 20:52:45.214939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.944 [2024-07-15 20:52:45.214955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.944 [2024-07-15 20:52:45.214962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.944 [2024-07-15 20:52:45.214968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.944 [2024-07-15 20:52:45.214982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.944 qpair failed and we were unable to recover it.
00:27:10.944 [2024-07-15 20:52:45.224899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.944 [2024-07-15 20:52:45.224968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.944 [2024-07-15 20:52:45.224987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.944 [2024-07-15 20:52:45.224994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.944 [2024-07-15 20:52:45.225001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.944 [2024-07-15 20:52:45.225015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.944 qpair failed and we were unable to recover it.
00:27:10.944 [2024-07-15 20:52:45.234913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.944 [2024-07-15 20:52:45.234983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.944 [2024-07-15 20:52:45.234998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.944 [2024-07-15 20:52:45.235004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.944 [2024-07-15 20:52:45.235011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.944 [2024-07-15 20:52:45.235025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.944 qpair failed and we were unable to recover it.
00:27:10.944 [2024-07-15 20:52:45.244954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.944 [2024-07-15 20:52:45.245030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.944 [2024-07-15 20:52:45.245046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.944 [2024-07-15 20:52:45.245053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.944 [2024-07-15 20:52:45.245060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.944 [2024-07-15 20:52:45.245075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.944 qpair failed and we were unable to recover it.
00:27:10.944 [2024-07-15 20:52:45.254978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.944 [2024-07-15 20:52:45.255047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.944 [2024-07-15 20:52:45.255063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.944 [2024-07-15 20:52:45.255070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.944 [2024-07-15 20:52:45.255076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.944 [2024-07-15 20:52:45.255091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.944 qpair failed and we were unable to recover it.
00:27:10.944 [2024-07-15 20:52:45.265034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.944 [2024-07-15 20:52:45.265100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.944 [2024-07-15 20:52:45.265114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.944 [2024-07-15 20:52:45.265122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.944 [2024-07-15 20:52:45.265128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.944 [2024-07-15 20:52:45.265146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.944 qpair failed and we were unable to recover it.
00:27:10.944 [2024-07-15 20:52:45.275024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.944 [2024-07-15 20:52:45.275092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.944 [2024-07-15 20:52:45.275107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.944 [2024-07-15 20:52:45.275114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.944 [2024-07-15 20:52:45.275121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.944 [2024-07-15 20:52:45.275136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.944 qpair failed and we were unable to recover it.
00:27:10.944 [2024-07-15 20:52:45.285059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.944 [2024-07-15 20:52:45.285129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.944 [2024-07-15 20:52:45.285144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.944 [2024-07-15 20:52:45.285152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.944 [2024-07-15 20:52:45.285158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.944 [2024-07-15 20:52:45.285173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.944 qpair failed and we were unable to recover it.
00:27:10.944 [2024-07-15 20:52:45.295102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.944 [2024-07-15 20:52:45.295172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.944 [2024-07-15 20:52:45.295188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.944 [2024-07-15 20:52:45.295194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.944 [2024-07-15 20:52:45.295201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.944 [2024-07-15 20:52:45.295216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.944 qpair failed and we were unable to recover it.
00:27:10.944 [2024-07-15 20:52:45.305108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.944 [2024-07-15 20:52:45.305169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.944 [2024-07-15 20:52:45.305184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.944 [2024-07-15 20:52:45.305191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.944 [2024-07-15 20:52:45.305197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.944 [2024-07-15 20:52:45.305213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.944 qpair failed and we were unable to recover it.
00:27:10.944 [2024-07-15 20:52:45.315143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.944 [2024-07-15 20:52:45.315216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.944 [2024-07-15 20:52:45.315236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.944 [2024-07-15 20:52:45.315244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.944 [2024-07-15 20:52:45.315250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.944 [2024-07-15 20:52:45.315265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.944 qpair failed and we were unable to recover it.
00:27:10.944 [2024-07-15 20:52:45.325274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.944 [2024-07-15 20:52:45.325352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.944 [2024-07-15 20:52:45.325367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.944 [2024-07-15 20:52:45.325375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.944 [2024-07-15 20:52:45.325382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.944 [2024-07-15 20:52:45.325397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.944 qpair failed and we were unable to recover it.
00:27:10.944 [2024-07-15 20:52:45.335263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.945 [2024-07-15 20:52:45.335330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.945 [2024-07-15 20:52:45.335346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.945 [2024-07-15 20:52:45.335353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.945 [2024-07-15 20:52:45.335360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.945 [2024-07-15 20:52:45.335374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.945 qpair failed and we were unable to recover it.
00:27:10.945 [2024-07-15 20:52:45.345276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.945 [2024-07-15 20:52:45.345346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.945 [2024-07-15 20:52:45.345364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.945 [2024-07-15 20:52:45.345371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.945 [2024-07-15 20:52:45.345378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.945 [2024-07-15 20:52:45.345394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.945 qpair failed and we were unable to recover it.
00:27:10.945 [2024-07-15 20:52:45.355295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.945 [2024-07-15 20:52:45.355368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.945 [2024-07-15 20:52:45.355386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.945 [2024-07-15 20:52:45.355393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.945 [2024-07-15 20:52:45.355404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.945 [2024-07-15 20:52:45.355420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.945 qpair failed and we were unable to recover it.
00:27:10.945 [2024-07-15 20:52:45.365302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.945 [2024-07-15 20:52:45.365373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.945 [2024-07-15 20:52:45.365390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.945 [2024-07-15 20:52:45.365398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.945 [2024-07-15 20:52:45.365404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.945 [2024-07-15 20:52:45.365420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.945 qpair failed and we were unable to recover it.
00:27:10.945 [2024-07-15 20:52:45.375334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.945 [2024-07-15 20:52:45.375406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.945 [2024-07-15 20:52:45.375423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.945 [2024-07-15 20:52:45.375430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.945 [2024-07-15 20:52:45.375437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.945 [2024-07-15 20:52:45.375452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.945 qpair failed and we were unable to recover it.
00:27:10.945 [2024-07-15 20:52:45.385360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.945 [2024-07-15 20:52:45.385421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.945 [2024-07-15 20:52:45.385436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.945 [2024-07-15 20:52:45.385443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.945 [2024-07-15 20:52:45.385449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.945 [2024-07-15 20:52:45.385464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.945 qpair failed and we were unable to recover it.
00:27:10.945 [2024-07-15 20:52:45.395327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.945 [2024-07-15 20:52:45.395398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.945 [2024-07-15 20:52:45.395414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.945 [2024-07-15 20:52:45.395421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.945 [2024-07-15 20:52:45.395427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.945 [2024-07-15 20:52:45.395442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.945 qpair failed and we were unable to recover it.
00:27:10.945 [2024-07-15 20:52:45.405454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.945 [2024-07-15 20:52:45.405530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.945 [2024-07-15 20:52:45.405546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.945 [2024-07-15 20:52:45.405553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.945 [2024-07-15 20:52:45.405559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.945 [2024-07-15 20:52:45.405574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.945 qpair failed and we were unable to recover it.
00:27:10.945 [2024-07-15 20:52:45.415446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.945 [2024-07-15 20:52:45.415515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.945 [2024-07-15 20:52:45.415530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.945 [2024-07-15 20:52:45.415537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.945 [2024-07-15 20:52:45.415543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:10.945 [2024-07-15 20:52:45.415558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.945 qpair failed and we were unable to recover it.
00:27:11.206 [2024-07-15 20:52:45.425489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.206 [2024-07-15 20:52:45.425564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.206 [2024-07-15 20:52:45.425582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.206 [2024-07-15 20:52:45.425592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.206 [2024-07-15 20:52:45.425600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.206 [2024-07-15 20:52:45.425616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.206 qpair failed and we were unable to recover it.
00:27:11.206 [2024-07-15 20:52:45.435489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.206 [2024-07-15 20:52:45.435557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.206 [2024-07-15 20:52:45.435573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.206 [2024-07-15 20:52:45.435581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.206 [2024-07-15 20:52:45.435587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.206 [2024-07-15 20:52:45.435602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.206 qpair failed and we were unable to recover it.
00:27:11.206 [2024-07-15 20:52:45.445524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.206 [2024-07-15 20:52:45.445595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.206 [2024-07-15 20:52:45.445611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.206 [2024-07-15 20:52:45.445622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.206 [2024-07-15 20:52:45.445628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.206 [2024-07-15 20:52:45.445643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.206 qpair failed and we were unable to recover it.
00:27:11.206 [2024-07-15 20:52:45.455520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.206 [2024-07-15 20:52:45.455580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.206 [2024-07-15 20:52:45.455596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.206 [2024-07-15 20:52:45.455603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.206 [2024-07-15 20:52:45.455610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.206 [2024-07-15 20:52:45.455624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.206 qpair failed and we were unable to recover it.
00:27:11.206 [2024-07-15 20:52:45.465580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.206 [2024-07-15 20:52:45.465653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.206 [2024-07-15 20:52:45.465669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.206 [2024-07-15 20:52:45.465676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.206 [2024-07-15 20:52:45.465683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.206 [2024-07-15 20:52:45.465698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.206 qpair failed and we were unable to recover it.
00:27:11.206 [2024-07-15 20:52:45.475600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.206 [2024-07-15 20:52:45.475669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.206 [2024-07-15 20:52:45.475685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.206 [2024-07-15 20:52:45.475692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.206 [2024-07-15 20:52:45.475698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.206 [2024-07-15 20:52:45.475714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.206 qpair failed and we were unable to recover it.
00:27:11.206 [2024-07-15 20:52:45.485636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.206 [2024-07-15 20:52:45.485756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.206 [2024-07-15 20:52:45.485772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.206 [2024-07-15 20:52:45.485780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.206 [2024-07-15 20:52:45.485786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.206 [2024-07-15 20:52:45.485803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.206 qpair failed and we were unable to recover it.
00:27:11.206 [2024-07-15 20:52:45.495671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.206 [2024-07-15 20:52:45.495741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.206 [2024-07-15 20:52:45.495756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.206 [2024-07-15 20:52:45.495763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.206 [2024-07-15 20:52:45.495769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.206 [2024-07-15 20:52:45.495783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.206 qpair failed and we were unable to recover it.
00:27:11.206 [2024-07-15 20:52:45.505701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.206 [2024-07-15 20:52:45.505815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.206 [2024-07-15 20:52:45.505832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.206 [2024-07-15 20:52:45.505839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.206 [2024-07-15 20:52:45.505846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.206 [2024-07-15 20:52:45.505862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.206 qpair failed and we were unable to recover it.
00:27:11.206 [2024-07-15 20:52:45.515720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.206 [2024-07-15 20:52:45.515791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.206 [2024-07-15 20:52:45.515806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.206 [2024-07-15 20:52:45.515814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.206 [2024-07-15 20:52:45.515820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.206 [2024-07-15 20:52:45.515835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.206 qpair failed and we were unable to recover it.
00:27:11.206 [2024-07-15 20:52:45.525754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.206 [2024-07-15 20:52:45.525826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.206 [2024-07-15 20:52:45.525841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.206 [2024-07-15 20:52:45.525849] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.206 [2024-07-15 20:52:45.525855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.206 [2024-07-15 20:52:45.525870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.206 qpair failed and we were unable to recover it.
00:27:11.206 [2024-07-15 20:52:45.535789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.207 [2024-07-15 20:52:45.535859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.207 [2024-07-15 20:52:45.535873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.207 [2024-07-15 20:52:45.535883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.207 [2024-07-15 20:52:45.535890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.207 [2024-07-15 20:52:45.535905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.207 qpair failed and we were unable to recover it.
00:27:11.207 [2024-07-15 20:52:45.545791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.207 [2024-07-15 20:52:45.545858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.207 [2024-07-15 20:52:45.545873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.207 [2024-07-15 20:52:45.545880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.207 [2024-07-15 20:52:45.545887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.207 [2024-07-15 20:52:45.545902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.207 qpair failed and we were unable to recover it.
00:27:11.207 [2024-07-15 20:52:45.555840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.207 [2024-07-15 20:52:45.555909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.207 [2024-07-15 20:52:45.555924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.207 [2024-07-15 20:52:45.555931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.207 [2024-07-15 20:52:45.555937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.207 [2024-07-15 20:52:45.555952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.207 qpair failed and we were unable to recover it.
00:27:11.207 [2024-07-15 20:52:45.565870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.207 [2024-07-15 20:52:45.565951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.207 [2024-07-15 20:52:45.565966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.207 [2024-07-15 20:52:45.565973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.207 [2024-07-15 20:52:45.565980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.207 [2024-07-15 20:52:45.565994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.207 qpair failed and we were unable to recover it.
00:27:11.207 [2024-07-15 20:52:45.575938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.207 [2024-07-15 20:52:45.576008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.207 [2024-07-15 20:52:45.576024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.207 [2024-07-15 20:52:45.576030] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.207 [2024-07-15 20:52:45.576037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.207 [2024-07-15 20:52:45.576052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.207 qpair failed and we were unable to recover it.
00:27:11.207 [2024-07-15 20:52:45.585867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.207 [2024-07-15 20:52:45.585928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.207 [2024-07-15 20:52:45.585943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.207 [2024-07-15 20:52:45.585950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.207 [2024-07-15 20:52:45.585956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.207 [2024-07-15 20:52:45.585971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.207 qpair failed and we were unable to recover it.
00:27:11.207 [2024-07-15 20:52:45.595968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.207 [2024-07-15 20:52:45.596036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.207 [2024-07-15 20:52:45.596052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.207 [2024-07-15 20:52:45.596059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.207 [2024-07-15 20:52:45.596065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.207 [2024-07-15 20:52:45.596080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.207 qpair failed and we were unable to recover it.
00:27:11.207 [2024-07-15 20:52:45.605960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.207 [2024-07-15 20:52:45.606028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.207 [2024-07-15 20:52:45.606044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.207 [2024-07-15 20:52:45.606051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.207 [2024-07-15 20:52:45.606057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.207 [2024-07-15 20:52:45.606072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.207 qpair failed and we were unable to recover it.
00:27:11.207 [2024-07-15 20:52:45.616018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.207 [2024-07-15 20:52:45.616084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.207 [2024-07-15 20:52:45.616101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.207 [2024-07-15 20:52:45.616109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.207 [2024-07-15 20:52:45.616116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.207 [2024-07-15 20:52:45.616131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.207 qpair failed and we were unable to recover it.
00:27:11.207 [2024-07-15 20:52:45.626053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.207 [2024-07-15 20:52:45.626121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.207 [2024-07-15 20:52:45.626140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.207 [2024-07-15 20:52:45.626147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.207 [2024-07-15 20:52:45.626153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.207 [2024-07-15 20:52:45.626168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.207 qpair failed and we were unable to recover it.
00:27:11.207 [2024-07-15 20:52:45.636064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.207 [2024-07-15 20:52:45.636172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.207 [2024-07-15 20:52:45.636196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.207 [2024-07-15 20:52:45.636203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.207 [2024-07-15 20:52:45.636210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.207 [2024-07-15 20:52:45.636229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.207 qpair failed and we were unable to recover it.
00:27:11.207 [2024-07-15 20:52:45.646105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.207 [2024-07-15 20:52:45.646179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.207 [2024-07-15 20:52:45.646194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.207 [2024-07-15 20:52:45.646202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.207 [2024-07-15 20:52:45.646209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.207 [2024-07-15 20:52:45.646223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.207 qpair failed and we were unable to recover it.
00:27:11.207 [2024-07-15 20:52:45.656133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.207 [2024-07-15 20:52:45.656257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.207 [2024-07-15 20:52:45.656273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.207 [2024-07-15 20:52:45.656280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.207 [2024-07-15 20:52:45.656287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.207 [2024-07-15 20:52:45.656303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.207 qpair failed and we were unable to recover it.
00:27:11.207 [2024-07-15 20:52:45.666116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.207 [2024-07-15 20:52:45.666183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.207 [2024-07-15 20:52:45.666198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.207 [2024-07-15 20:52:45.666205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.207 [2024-07-15 20:52:45.666211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.207 [2024-07-15 20:52:45.666234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.207 qpair failed and we were unable to recover it.
00:27:11.207 [2024-07-15 20:52:45.676174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.207 [2024-07-15 20:52:45.676248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.207 [2024-07-15 20:52:45.676263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.207 [2024-07-15 20:52:45.676270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.208 [2024-07-15 20:52:45.676277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.208 [2024-07-15 20:52:45.676292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.208 qpair failed and we were unable to recover it.
00:27:11.208 [2024-07-15 20:52:45.686219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.208 [2024-07-15 20:52:45.686292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.208 [2024-07-15 20:52:45.686309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.208 [2024-07-15 20:52:45.686316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.208 [2024-07-15 20:52:45.686323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.208 [2024-07-15 20:52:45.686338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.208 qpair failed and we were unable to recover it. 00:27:11.468 [2024-07-15 20:52:45.696229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.468 [2024-07-15 20:52:45.696298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.468 [2024-07-15 20:52:45.696316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.468 [2024-07-15 20:52:45.696323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.468 [2024-07-15 20:52:45.696331] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.468 [2024-07-15 20:52:45.696347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.468 qpair failed and we were unable to recover it. 00:27:11.468 [2024-07-15 20:52:45.706295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.468 [2024-07-15 20:52:45.706362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.468 [2024-07-15 20:52:45.706379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.468 [2024-07-15 20:52:45.706386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.469 [2024-07-15 20:52:45.706393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.469 [2024-07-15 20:52:45.706409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.469 qpair failed and we were unable to recover it. 
00:27:11.469 [2024-07-15 20:52:45.716292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.469 [2024-07-15 20:52:45.716357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.469 [2024-07-15 20:52:45.716376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.469 [2024-07-15 20:52:45.716384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.469 [2024-07-15 20:52:45.716391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.469 [2024-07-15 20:52:45.716406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.469 qpair failed and we were unable to recover it. 00:27:11.469 [2024-07-15 20:52:45.726321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.469 [2024-07-15 20:52:45.726387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.469 [2024-07-15 20:52:45.726403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.469 [2024-07-15 20:52:45.726410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.469 [2024-07-15 20:52:45.726416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.469 [2024-07-15 20:52:45.726431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.469 qpair failed and we were unable to recover it. 00:27:11.469 [2024-07-15 20:52:45.736389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.469 [2024-07-15 20:52:45.736460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.469 [2024-07-15 20:52:45.736475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.469 [2024-07-15 20:52:45.736481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.469 [2024-07-15 20:52:45.736487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.469 [2024-07-15 20:52:45.736503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.469 qpair failed and we were unable to recover it. 
00:27:11.469 [2024-07-15 20:52:45.746386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.469 [2024-07-15 20:52:45.746497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.469 [2024-07-15 20:52:45.746513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.469 [2024-07-15 20:52:45.746521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.469 [2024-07-15 20:52:45.746527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.469 [2024-07-15 20:52:45.746542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.469 qpair failed and we were unable to recover it. 00:27:11.469 [2024-07-15 20:52:45.756402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.469 [2024-07-15 20:52:45.756474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.469 [2024-07-15 20:52:45.756489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.469 [2024-07-15 20:52:45.756496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.469 [2024-07-15 20:52:45.756506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.469 [2024-07-15 20:52:45.756521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.469 qpair failed and we were unable to recover it. 00:27:11.469 [2024-07-15 20:52:45.766445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.469 [2024-07-15 20:52:45.766521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.469 [2024-07-15 20:52:45.766536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.469 [2024-07-15 20:52:45.766543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.469 [2024-07-15 20:52:45.766549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.469 [2024-07-15 20:52:45.766564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.469 qpair failed and we were unable to recover it. 
00:27:11.469 [2024-07-15 20:52:45.776411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.469 [2024-07-15 20:52:45.776481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.469 [2024-07-15 20:52:45.776497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.469 [2024-07-15 20:52:45.776504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.469 [2024-07-15 20:52:45.776510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.469 [2024-07-15 20:52:45.776525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.469 qpair failed and we were unable to recover it. 00:27:11.469 [2024-07-15 20:52:45.786515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.469 [2024-07-15 20:52:45.786582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.469 [2024-07-15 20:52:45.786597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.469 [2024-07-15 20:52:45.786604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.469 [2024-07-15 20:52:45.786610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.469 [2024-07-15 20:52:45.786624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.469 qpair failed and we were unable to recover it. 00:27:11.469 [2024-07-15 20:52:45.796467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.469 [2024-07-15 20:52:45.796540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.469 [2024-07-15 20:52:45.796555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.469 [2024-07-15 20:52:45.796562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.469 [2024-07-15 20:52:45.796569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.469 [2024-07-15 20:52:45.796584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.469 qpair failed and we were unable to recover it. 
00:27:11.469 [2024-07-15 20:52:45.806522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.469 [2024-07-15 20:52:45.806632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.469 [2024-07-15 20:52:45.806648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.469 [2024-07-15 20:52:45.806655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.469 [2024-07-15 20:52:45.806662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.469 [2024-07-15 20:52:45.806677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.469 qpair failed and we were unable to recover it. 00:27:11.469 [2024-07-15 20:52:45.816580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.469 [2024-07-15 20:52:45.816649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.469 [2024-07-15 20:52:45.816667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.469 [2024-07-15 20:52:45.816674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.469 [2024-07-15 20:52:45.816680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.469 [2024-07-15 20:52:45.816697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.469 qpair failed and we were unable to recover it. 00:27:11.469 [2024-07-15 20:52:45.826633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.469 [2024-07-15 20:52:45.826704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.469 [2024-07-15 20:52:45.826720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.469 [2024-07-15 20:52:45.826727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.469 [2024-07-15 20:52:45.826733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.469 [2024-07-15 20:52:45.826748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.469 qpair failed and we were unable to recover it. 
00:27:11.469 [2024-07-15 20:52:45.836624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.469 [2024-07-15 20:52:45.836703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.469 [2024-07-15 20:52:45.836719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.469 [2024-07-15 20:52:45.836727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.469 [2024-07-15 20:52:45.836733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.469 [2024-07-15 20:52:45.836749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.469 qpair failed and we were unable to recover it. 00:27:11.469 [2024-07-15 20:52:45.846680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.470 [2024-07-15 20:52:45.846751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.470 [2024-07-15 20:52:45.846766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.470 [2024-07-15 20:52:45.846776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.470 [2024-07-15 20:52:45.846782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.470 [2024-07-15 20:52:45.846797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.470 qpair failed and we were unable to recover it. 00:27:11.470 [2024-07-15 20:52:45.856708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.470 [2024-07-15 20:52:45.856774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.470 [2024-07-15 20:52:45.856789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.470 [2024-07-15 20:52:45.856796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.470 [2024-07-15 20:52:45.856802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.470 [2024-07-15 20:52:45.856816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.470 qpair failed and we were unable to recover it. 
00:27:11.470 [2024-07-15 20:52:45.866744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.470 [2024-07-15 20:52:45.866811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.470 [2024-07-15 20:52:45.866826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.470 [2024-07-15 20:52:45.866834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.470 [2024-07-15 20:52:45.866840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.470 [2024-07-15 20:52:45.866854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.470 qpair failed and we were unable to recover it. 00:27:11.470 [2024-07-15 20:52:45.876751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.470 [2024-07-15 20:52:45.876815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.470 [2024-07-15 20:52:45.876829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.470 [2024-07-15 20:52:45.876837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.470 [2024-07-15 20:52:45.876843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.470 [2024-07-15 20:52:45.876857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.470 qpair failed and we were unable to recover it. 00:27:11.470 [2024-07-15 20:52:45.886710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.470 [2024-07-15 20:52:45.886788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.470 [2024-07-15 20:52:45.886803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.470 [2024-07-15 20:52:45.886810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.470 [2024-07-15 20:52:45.886817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.470 [2024-07-15 20:52:45.886832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.470 qpair failed and we were unable to recover it. 
00:27:11.470 [2024-07-15 20:52:45.896804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.470 [2024-07-15 20:52:45.896872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.470 [2024-07-15 20:52:45.896888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.470 [2024-07-15 20:52:45.896895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.470 [2024-07-15 20:52:45.896901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.470 [2024-07-15 20:52:45.896915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.470 qpair failed and we were unable to recover it. 00:27:11.470 [2024-07-15 20:52:45.906846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.470 [2024-07-15 20:52:45.906923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.470 [2024-07-15 20:52:45.906937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.470 [2024-07-15 20:52:45.906944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.470 [2024-07-15 20:52:45.906951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.470 [2024-07-15 20:52:45.906965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.470 qpair failed and we were unable to recover it. 00:27:11.470 [2024-07-15 20:52:45.916858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.470 [2024-07-15 20:52:45.916925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.470 [2024-07-15 20:52:45.916940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.470 [2024-07-15 20:52:45.916947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.470 [2024-07-15 20:52:45.916953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.470 [2024-07-15 20:52:45.916968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.470 qpair failed and we were unable to recover it. 
00:27:11.470 [2024-07-15 20:52:45.926882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.470 [2024-07-15 20:52:45.926948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.470 [2024-07-15 20:52:45.926963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.470 [2024-07-15 20:52:45.926970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.470 [2024-07-15 20:52:45.926976] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.470 [2024-07-15 20:52:45.926991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.470 qpair failed and we were unable to recover it. 00:27:11.470 [2024-07-15 20:52:45.936843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.470 [2024-07-15 20:52:45.936911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.470 [2024-07-15 20:52:45.936926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.470 [2024-07-15 20:52:45.936936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.470 [2024-07-15 20:52:45.936942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.470 [2024-07-15 20:52:45.936957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.470 qpair failed and we were unable to recover it. 00:27:11.470 [2024-07-15 20:52:45.946965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.470 [2024-07-15 20:52:45.947057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.470 [2024-07-15 20:52:45.947074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.470 [2024-07-15 20:52:45.947081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.470 [2024-07-15 20:52:45.947088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.470 [2024-07-15 20:52:45.947103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.470 qpair failed and we were unable to recover it. 
00:27:11.730 [2024-07-15 20:52:45.957014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.730 [2024-07-15 20:52:45.957094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.730 [2024-07-15 20:52:45.957111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.730 [2024-07-15 20:52:45.957118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.730 [2024-07-15 20:52:45.957124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.730 [2024-07-15 20:52:45.957140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.730 qpair failed and we were unable to recover it. 00:27:11.730 [2024-07-15 20:52:45.966995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.730 [2024-07-15 20:52:45.967059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.730 [2024-07-15 20:52:45.967074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.730 [2024-07-15 20:52:45.967081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.730 [2024-07-15 20:52:45.967087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.731 [2024-07-15 20:52:45.967101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.731 qpair failed and we were unable to recover it. 00:27:11.731 [2024-07-15 20:52:45.977115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.731 [2024-07-15 20:52:45.977176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.731 [2024-07-15 20:52:45.977191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.731 [2024-07-15 20:52:45.977198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.731 [2024-07-15 20:52:45.977204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.731 [2024-07-15 20:52:45.977219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.731 qpair failed and we were unable to recover it. 
00:27:11.731 [2024-07-15 20:52:45.987082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.731 [2024-07-15 20:52:45.987154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.731 [2024-07-15 20:52:45.987170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.731 [2024-07-15 20:52:45.987177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.731 [2024-07-15 20:52:45.987183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.731 [2024-07-15 20:52:45.987199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.731 qpair failed and we were unable to recover it. 00:27:11.731 [2024-07-15 20:52:45.997095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.731 [2024-07-15 20:52:45.997163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.731 [2024-07-15 20:52:45.997178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.731 [2024-07-15 20:52:45.997186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.731 [2024-07-15 20:52:45.997192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.731 [2024-07-15 20:52:45.997206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.731 qpair failed and we were unable to recover it. 00:27:11.731 [2024-07-15 20:52:46.007139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.731 [2024-07-15 20:52:46.007211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.731 [2024-07-15 20:52:46.007233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.731 [2024-07-15 20:52:46.007241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.731 [2024-07-15 20:52:46.007248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.731 [2024-07-15 20:52:46.007264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.731 qpair failed and we were unable to recover it. 
00:27:11.731 [2024-07-15 20:52:46.017155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.731 [2024-07-15 20:52:46.017222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.731 [2024-07-15 20:52:46.017241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.731 [2024-07-15 20:52:46.017249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.731 [2024-07-15 20:52:46.017256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.731 [2024-07-15 20:52:46.017271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.731 qpair failed and we were unable to recover it. 00:27:11.731 [2024-07-15 20:52:46.027185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.731 [2024-07-15 20:52:46.027304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.731 [2024-07-15 20:52:46.027324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.731 [2024-07-15 20:52:46.027331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.731 [2024-07-15 20:52:46.027337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.731 [2024-07-15 20:52:46.027353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.731 qpair failed and we were unable to recover it. 00:27:11.731 [2024-07-15 20:52:46.037198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.731 [2024-07-15 20:52:46.037279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.731 [2024-07-15 20:52:46.037296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.731 [2024-07-15 20:52:46.037303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.731 [2024-07-15 20:52:46.037309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.731 [2024-07-15 20:52:46.037325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.731 qpair failed and we were unable to recover it. 
00:27:11.731 [2024-07-15 20:52:46.047236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.731 [2024-07-15 20:52:46.047307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.731 [2024-07-15 20:52:46.047323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.731 [2024-07-15 20:52:46.047330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.731 [2024-07-15 20:52:46.047336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.731 [2024-07-15 20:52:46.047352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.731 qpair failed and we were unable to recover it. 00:27:11.731 [2024-07-15 20:52:46.057260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.731 [2024-07-15 20:52:46.057328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.731 [2024-07-15 20:52:46.057343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.731 [2024-07-15 20:52:46.057349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.731 [2024-07-15 20:52:46.057355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.731 [2024-07-15 20:52:46.057371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.731 qpair failed and we were unable to recover it. 00:27:11.731 [2024-07-15 20:52:46.067294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.731 [2024-07-15 20:52:46.067364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.731 [2024-07-15 20:52:46.067379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.731 [2024-07-15 20:52:46.067386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.731 [2024-07-15 20:52:46.067392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.731 [2024-07-15 20:52:46.067414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.731 qpair failed and we were unable to recover it. 
00:27:11.731 [2024-07-15 20:52:46.077309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.731 [2024-07-15 20:52:46.077387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.732 [2024-07-15 20:52:46.077403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.732 [2024-07-15 20:52:46.077410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.732 [2024-07-15 20:52:46.077416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.732 [2024-07-15 20:52:46.077431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.732 qpair failed and we were unable to recover it. 00:27:11.732 [2024-07-15 20:52:46.087350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.732 [2024-07-15 20:52:46.087430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.732 [2024-07-15 20:52:46.087446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.732 [2024-07-15 20:52:46.087453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.732 [2024-07-15 20:52:46.087459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.732 [2024-07-15 20:52:46.087474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.732 qpair failed and we were unable to recover it. 00:27:11.732 [2024-07-15 20:52:46.097376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.732 [2024-07-15 20:52:46.097445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.732 [2024-07-15 20:52:46.097460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.732 [2024-07-15 20:52:46.097467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.732 [2024-07-15 20:52:46.097473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.732 [2024-07-15 20:52:46.097487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.732 qpair failed and we were unable to recover it. 
00:27:11.732 [2024-07-15 20:52:46.107327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.732 [2024-07-15 20:52:46.107393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.732 [2024-07-15 20:52:46.107410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.732 [2024-07-15 20:52:46.107418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.732 [2024-07-15 20:52:46.107425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.732 [2024-07-15 20:52:46.107440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.732 qpair failed and we were unable to recover it. 00:27:11.732 [2024-07-15 20:52:46.117428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.732 [2024-07-15 20:52:46.117497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.732 [2024-07-15 20:52:46.117517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.732 [2024-07-15 20:52:46.117525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.732 [2024-07-15 20:52:46.117532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.732 [2024-07-15 20:52:46.117546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.732 qpair failed and we were unable to recover it. 00:27:11.732 [2024-07-15 20:52:46.127472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.732 [2024-07-15 20:52:46.127537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.732 [2024-07-15 20:52:46.127553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.732 [2024-07-15 20:52:46.127560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.732 [2024-07-15 20:52:46.127566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.732 [2024-07-15 20:52:46.127581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.732 qpair failed and we were unable to recover it. 
00:27:11.732 [2024-07-15 20:52:46.137481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.732 [2024-07-15 20:52:46.137548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.732 [2024-07-15 20:52:46.137563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.732 [2024-07-15 20:52:46.137570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.732 [2024-07-15 20:52:46.137576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.732 [2024-07-15 20:52:46.137590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.732 qpair failed and we were unable to recover it. 00:27:11.732 [2024-07-15 20:52:46.147517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.732 [2024-07-15 20:52:46.147586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.732 [2024-07-15 20:52:46.147601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.732 [2024-07-15 20:52:46.147608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.732 [2024-07-15 20:52:46.147614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.732 [2024-07-15 20:52:46.147629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.732 qpair failed and we were unable to recover it. 00:27:11.732 [2024-07-15 20:52:46.157537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.732 [2024-07-15 20:52:46.157607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.732 [2024-07-15 20:52:46.157621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.732 [2024-07-15 20:52:46.157628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.732 [2024-07-15 20:52:46.157637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.732 [2024-07-15 20:52:46.157652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.732 qpair failed and we were unable to recover it. 
00:27:11.732 [2024-07-15 20:52:46.167561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.732 [2024-07-15 20:52:46.167633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.732 [2024-07-15 20:52:46.167648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.732 [2024-07-15 20:52:46.167656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.732 [2024-07-15 20:52:46.167662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.732 [2024-07-15 20:52:46.167676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.732 qpair failed and we were unable to recover it. 00:27:11.732 [2024-07-15 20:52:46.177627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.732 [2024-07-15 20:52:46.177693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.733 [2024-07-15 20:52:46.177710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.733 [2024-07-15 20:52:46.177718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.733 [2024-07-15 20:52:46.177725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.733 [2024-07-15 20:52:46.177740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.733 qpair failed and we were unable to recover it. 00:27:11.733 [2024-07-15 20:52:46.187621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.733 [2024-07-15 20:52:46.187688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.733 [2024-07-15 20:52:46.187704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.733 [2024-07-15 20:52:46.187711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.733 [2024-07-15 20:52:46.187717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.733 [2024-07-15 20:52:46.187732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.733 qpair failed and we were unable to recover it. 
00:27:11.733 [2024-07-15 20:52:46.197605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.733 [2024-07-15 20:52:46.197696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.733 [2024-07-15 20:52:46.197711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.733 [2024-07-15 20:52:46.197719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.733 [2024-07-15 20:52:46.197726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.733 [2024-07-15 20:52:46.197741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.733 qpair failed and we were unable to recover it. 00:27:11.733 [2024-07-15 20:52:46.207632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.733 [2024-07-15 20:52:46.207722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.733 [2024-07-15 20:52:46.207739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.733 [2024-07-15 20:52:46.207746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.733 [2024-07-15 20:52:46.207752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.733 [2024-07-15 20:52:46.207767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.733 qpair failed and we were unable to recover it. 00:27:11.993 [2024-07-15 20:52:46.217692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.993 [2024-07-15 20:52:46.217764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.993 [2024-07-15 20:52:46.217782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.993 [2024-07-15 20:52:46.217790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.993 [2024-07-15 20:52:46.217797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.993 [2024-07-15 20:52:46.217814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.993 qpair failed and we were unable to recover it. 
00:27:11.993 [2024-07-15 20:52:46.227735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.993 [2024-07-15 20:52:46.227808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.993 [2024-07-15 20:52:46.227824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.993 [2024-07-15 20:52:46.227831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.993 [2024-07-15 20:52:46.227837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.993 [2024-07-15 20:52:46.227853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.993 qpair failed and we were unable to recover it. 00:27:11.993 [2024-07-15 20:52:46.237784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.993 [2024-07-15 20:52:46.237850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.993 [2024-07-15 20:52:46.237865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.993 [2024-07-15 20:52:46.237873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.993 [2024-07-15 20:52:46.237880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.993 [2024-07-15 20:52:46.237894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.993 qpair failed and we were unable to recover it. 00:27:11.993 [2024-07-15 20:52:46.247752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.993 [2024-07-15 20:52:46.247854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.993 [2024-07-15 20:52:46.247869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.993 [2024-07-15 20:52:46.247877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.993 [2024-07-15 20:52:46.247888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:11.993 [2024-07-15 20:52:46.247904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.993 qpair failed and we were unable to recover it. 
00:27:11.993 [2024-07-15 20:52:46.257770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.993 [2024-07-15 20:52:46.257845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.993 [2024-07-15 20:52:46.257860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.993 [2024-07-15 20:52:46.257868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.993 [2024-07-15 20:52:46.257874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.993 [2024-07-15 20:52:46.257889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.993 qpair failed and we were unable to recover it.
00:27:11.993 [2024-07-15 20:52:46.267896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.993 [2024-07-15 20:52:46.267998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.993 [2024-07-15 20:52:46.268015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.993 [2024-07-15 20:52:46.268022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.993 [2024-07-15 20:52:46.268028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.993 [2024-07-15 20:52:46.268044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.993 qpair failed and we were unable to recover it.
00:27:11.993 [2024-07-15 20:52:46.277893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.993 [2024-07-15 20:52:46.277979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.993 [2024-07-15 20:52:46.277994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.993 [2024-07-15 20:52:46.278001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.993 [2024-07-15 20:52:46.278007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.993 [2024-07-15 20:52:46.278022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.993 qpair failed and we were unable to recover it.
00:27:11.993 [2024-07-15 20:52:46.287899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.993 [2024-07-15 20:52:46.287969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.993 [2024-07-15 20:52:46.287984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.993 [2024-07-15 20:52:46.287991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.993 [2024-07-15 20:52:46.287997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.993 [2024-07-15 20:52:46.288012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.993 qpair failed and we were unable to recover it.
00:27:11.993 [2024-07-15 20:52:46.297904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.993 [2024-07-15 20:52:46.297969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.993 [2024-07-15 20:52:46.297984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.993 [2024-07-15 20:52:46.297991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.993 [2024-07-15 20:52:46.297997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.993 [2024-07-15 20:52:46.298013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.993 qpair failed and we were unable to recover it.
00:27:11.993 [2024-07-15 20:52:46.307916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.993 [2024-07-15 20:52:46.308013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.993 [2024-07-15 20:52:46.308031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.993 [2024-07-15 20:52:46.308038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.993 [2024-07-15 20:52:46.308045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.993 [2024-07-15 20:52:46.308060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.993 qpair failed and we were unable to recover it.
00:27:11.993 [2024-07-15 20:52:46.318015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.993 [2024-07-15 20:52:46.318097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.993 [2024-07-15 20:52:46.318112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.993 [2024-07-15 20:52:46.318119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.993 [2024-07-15 20:52:46.318126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.993 [2024-07-15 20:52:46.318141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.993 qpair failed and we were unable to recover it.
00:27:11.993 [2024-07-15 20:52:46.328049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.993 [2024-07-15 20:52:46.328128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.993 [2024-07-15 20:52:46.328143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.993 [2024-07-15 20:52:46.328151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.993 [2024-07-15 20:52:46.328157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.994 [2024-07-15 20:52:46.328172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.994 qpair failed and we were unable to recover it.
00:27:11.994 [2024-07-15 20:52:46.338065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.994 [2024-07-15 20:52:46.338141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.994 [2024-07-15 20:52:46.338157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.994 [2024-07-15 20:52:46.338167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.994 [2024-07-15 20:52:46.338173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.994 [2024-07-15 20:52:46.338188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.994 qpair failed and we were unable to recover it.
00:27:11.994 [2024-07-15 20:52:46.348019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.994 [2024-07-15 20:52:46.348087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.994 [2024-07-15 20:52:46.348102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.994 [2024-07-15 20:52:46.348109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.994 [2024-07-15 20:52:46.348115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.994 [2024-07-15 20:52:46.348130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.994 qpair failed and we were unable to recover it.
00:27:11.994 [2024-07-15 20:52:46.358073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.994 [2024-07-15 20:52:46.358138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.994 [2024-07-15 20:52:46.358153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.994 [2024-07-15 20:52:46.358160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.994 [2024-07-15 20:52:46.358166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.994 [2024-07-15 20:52:46.358181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.994 qpair failed and we were unable to recover it.
00:27:11.994 [2024-07-15 20:52:46.368131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.994 [2024-07-15 20:52:46.368204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.994 [2024-07-15 20:52:46.368221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.994 [2024-07-15 20:52:46.368243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.994 [2024-07-15 20:52:46.368250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.994 [2024-07-15 20:52:46.368267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.994 qpair failed and we were unable to recover it.
00:27:11.994 [2024-07-15 20:52:46.378148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.994 [2024-07-15 20:52:46.378233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.994 [2024-07-15 20:52:46.378248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.994 [2024-07-15 20:52:46.378255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.994 [2024-07-15 20:52:46.378261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.994 [2024-07-15 20:52:46.378277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.994 qpair failed and we were unable to recover it.
00:27:11.994 [2024-07-15 20:52:46.388209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.994 [2024-07-15 20:52:46.388278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.994 [2024-07-15 20:52:46.388294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.994 [2024-07-15 20:52:46.388301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.994 [2024-07-15 20:52:46.388307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.994 [2024-07-15 20:52:46.388322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.994 qpair failed and we were unable to recover it.
00:27:11.994 [2024-07-15 20:52:46.398187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.994 [2024-07-15 20:52:46.398259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.994 [2024-07-15 20:52:46.398275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.994 [2024-07-15 20:52:46.398282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.994 [2024-07-15 20:52:46.398289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.994 [2024-07-15 20:52:46.398304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.994 qpair failed and we were unable to recover it.
00:27:11.994 [2024-07-15 20:52:46.408249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.994 [2024-07-15 20:52:46.408366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.994 [2024-07-15 20:52:46.408383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.994 [2024-07-15 20:52:46.408391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.994 [2024-07-15 20:52:46.408397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.994 [2024-07-15 20:52:46.408413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.994 qpair failed and we were unable to recover it.
00:27:11.994 [2024-07-15 20:52:46.418245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.994 [2024-07-15 20:52:46.418312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.994 [2024-07-15 20:52:46.418329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.994 [2024-07-15 20:52:46.418337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.994 [2024-07-15 20:52:46.418343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.994 [2024-07-15 20:52:46.418358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.994 qpair failed and we were unable to recover it.
00:27:11.994 [2024-07-15 20:52:46.428298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.994 [2024-07-15 20:52:46.428378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.994 [2024-07-15 20:52:46.428398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.994 [2024-07-15 20:52:46.428407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.994 [2024-07-15 20:52:46.428413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.994 [2024-07-15 20:52:46.428429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.994 qpair failed and we were unable to recover it.
00:27:11.994 [2024-07-15 20:52:46.438286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.994 [2024-07-15 20:52:46.438358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.994 [2024-07-15 20:52:46.438376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.994 [2024-07-15 20:52:46.438383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.994 [2024-07-15 20:52:46.438390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.994 [2024-07-15 20:52:46.438405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.994 qpair failed and we were unable to recover it.
00:27:11.994 [2024-07-15 20:52:46.448366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.994 [2024-07-15 20:52:46.448439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.994 [2024-07-15 20:52:46.448456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.994 [2024-07-15 20:52:46.448463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.994 [2024-07-15 20:52:46.448470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.994 [2024-07-15 20:52:46.448485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.994 qpair failed and we were unable to recover it.
00:27:11.994 [2024-07-15 20:52:46.458353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.994 [2024-07-15 20:52:46.458421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.994 [2024-07-15 20:52:46.458437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.994 [2024-07-15 20:52:46.458445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.994 [2024-07-15 20:52:46.458451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.994 [2024-07-15 20:52:46.458466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.994 qpair failed and we were unable to recover it.
00:27:11.994 [2024-07-15 20:52:46.468421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.994 [2024-07-15 20:52:46.468487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.994 [2024-07-15 20:52:46.468503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.994 [2024-07-15 20:52:46.468511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.994 [2024-07-15 20:52:46.468517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:11.994 [2024-07-15 20:52:46.468535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:11.994 qpair failed and we were unable to recover it.
00:27:12.277 [2024-07-15 20:52:46.478480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.277 [2024-07-15 20:52:46.478548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.277 [2024-07-15 20:52:46.478566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.277 [2024-07-15 20:52:46.478573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.277 [2024-07-15 20:52:46.478580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:12.277 [2024-07-15 20:52:46.478595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.277 qpair failed and we were unable to recover it.
00:27:12.277 [2024-07-15 20:52:46.488446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.277 [2024-07-15 20:52:46.488530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.277 [2024-07-15 20:52:46.488546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.277 [2024-07-15 20:52:46.488553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.277 [2024-07-15 20:52:46.488559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:12.277 [2024-07-15 20:52:46.488574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.277 qpair failed and we were unable to recover it.
00:27:12.277 [2024-07-15 20:52:46.498571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.277 [2024-07-15 20:52:46.498638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.278 [2024-07-15 20:52:46.498654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.278 [2024-07-15 20:52:46.498661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.278 [2024-07-15 20:52:46.498667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:12.278 [2024-07-15 20:52:46.498682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.278 qpair failed and we were unable to recover it.
00:27:12.278 [2024-07-15 20:52:46.508487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.278 [2024-07-15 20:52:46.508552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.278 [2024-07-15 20:52:46.508568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.278 [2024-07-15 20:52:46.508575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.278 [2024-07-15 20:52:46.508581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:12.278 [2024-07-15 20:52:46.508596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.278 qpair failed and we were unable to recover it.
00:27:12.278 [2024-07-15 20:52:46.518601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.278 [2024-07-15 20:52:46.518673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.278 [2024-07-15 20:52:46.518693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.278 [2024-07-15 20:52:46.518700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.278 [2024-07-15 20:52:46.518706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:12.278 [2024-07-15 20:52:46.518721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.278 qpair failed and we were unable to recover it.
00:27:12.278 [2024-07-15 20:52:46.528648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.278 [2024-07-15 20:52:46.528731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.278 [2024-07-15 20:52:46.528746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.278 [2024-07-15 20:52:46.528753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.278 [2024-07-15 20:52:46.528759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:12.278 [2024-07-15 20:52:46.528774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.278 qpair failed and we were unable to recover it.
00:27:12.278 [2024-07-15 20:52:46.538681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.278 [2024-07-15 20:52:46.538771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.278 [2024-07-15 20:52:46.538787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.278 [2024-07-15 20:52:46.538794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.278 [2024-07-15 20:52:46.538800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:12.278 [2024-07-15 20:52:46.538816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.278 qpair failed and we were unable to recover it.
00:27:12.278 [2024-07-15 20:52:46.548610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.278 [2024-07-15 20:52:46.548678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.278 [2024-07-15 20:52:46.548693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.278 [2024-07-15 20:52:46.548700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.278 [2024-07-15 20:52:46.548706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:12.278 [2024-07-15 20:52:46.548721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.278 qpair failed and we were unable to recover it.
00:27:12.278 [2024-07-15 20:52:46.558643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.278 [2024-07-15 20:52:46.558732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.278 [2024-07-15 20:52:46.558747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.278 [2024-07-15 20:52:46.558754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.278 [2024-07-15 20:52:46.558764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:12.278 [2024-07-15 20:52:46.558779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.278 qpair failed and we were unable to recover it.
00:27:12.278 [2024-07-15 20:52:46.568697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.278 [2024-07-15 20:52:46.568763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.278 [2024-07-15 20:52:46.568779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.278 [2024-07-15 20:52:46.568786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.278 [2024-07-15 20:52:46.568793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:12.278 [2024-07-15 20:52:46.568807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.278 qpair failed and we were unable to recover it.
00:27:12.278 [2024-07-15 20:52:46.578749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.278 [2024-07-15 20:52:46.578818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.278 [2024-07-15 20:52:46.578833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.278 [2024-07-15 20:52:46.578841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.278 [2024-07-15 20:52:46.578847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:12.278 [2024-07-15 20:52:46.578861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.278 qpair failed and we were unable to recover it.
00:27:12.278 [2024-07-15 20:52:46.588727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.278 [2024-07-15 20:52:46.588822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.278 [2024-07-15 20:52:46.588837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.278 [2024-07-15 20:52:46.588844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.278 [2024-07-15 20:52:46.588851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:12.278 [2024-07-15 20:52:46.588866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.278 qpair failed and we were unable to recover it.
00:27:12.278 [2024-07-15 20:52:46.598863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.278 [2024-07-15 20:52:46.598932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.278 [2024-07-15 20:52:46.598947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.278 [2024-07-15 20:52:46.598954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.278 [2024-07-15 20:52:46.598960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:12.278 [2024-07-15 20:52:46.598974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.278 qpair failed and we were unable to recover it.
00:27:12.278 [2024-07-15 20:52:46.608824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.278 [2024-07-15 20:52:46.608896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.278 [2024-07-15 20:52:46.608911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.278 [2024-07-15 20:52:46.608918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.278 [2024-07-15 20:52:46.608924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:12.278 [2024-07-15 20:52:46.608938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.278 qpair failed and we were unable to recover it.
00:27:12.278 [2024-07-15 20:52:46.618883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.278 [2024-07-15 20:52:46.618947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.278 [2024-07-15 20:52:46.618962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.278 [2024-07-15 20:52:46.618969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.278 [2024-07-15 20:52:46.618975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:12.278 [2024-07-15 20:52:46.618990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.278 qpair failed and we were unable to recover it.
00:27:12.278 [2024-07-15 20:52:46.628892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.278 [2024-07-15 20:52:46.628954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.278 [2024-07-15 20:52:46.628971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.278 [2024-07-15 20:52:46.628979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.278 [2024-07-15 20:52:46.628986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:12.278 [2024-07-15 20:52:46.629001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.278 qpair failed and we were unable to recover it.
00:27:12.278 [2024-07-15 20:52:46.638930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.278 [2024-07-15 20:52:46.638997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.278 [2024-07-15 20:52:46.639012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.279 [2024-07-15 20:52:46.639019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.279 [2024-07-15 20:52:46.639025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:12.279 [2024-07-15 20:52:46.639040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.279 qpair failed and we were unable to recover it.
00:27:12.279 [2024-07-15 20:52:46.648962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.279 [2024-07-15 20:52:46.649026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.279 [2024-07-15 20:52:46.649040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.279 [2024-07-15 20:52:46.649048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.279 [2024-07-15 20:52:46.649057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:12.279 [2024-07-15 20:52:46.649072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.279 qpair failed and we were unable to recover it.
00:27:12.279 [2024-07-15 20:52:46.659020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.279 [2024-07-15 20:52:46.659086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.279 [2024-07-15 20:52:46.659102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.279 [2024-07-15 20:52:46.659110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.279 [2024-07-15 20:52:46.659116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:12.279 [2024-07-15 20:52:46.659130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.279 qpair failed and we were unable to recover it.
00:27:12.279 [2024-07-15 20:52:46.669034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.279 [2024-07-15 20:52:46.669098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.279 [2024-07-15 20:52:46.669113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.279 [2024-07-15 20:52:46.669121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.279 [2024-07-15 20:52:46.669127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:12.279 [2024-07-15 20:52:46.669142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.279 qpair failed and we were unable to recover it.
00:27:12.279 [2024-07-15 20:52:46.679041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.279 [2024-07-15 20:52:46.679112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.279 [2024-07-15 20:52:46.679127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.279 [2024-07-15 20:52:46.679137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.279 [2024-07-15 20:52:46.679143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:12.279 [2024-07-15 20:52:46.679158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.279 qpair failed and we were unable to recover it.
00:27:12.279 [2024-07-15 20:52:46.689104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.279 [2024-07-15 20:52:46.689175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.279 [2024-07-15 20:52:46.689190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.279 [2024-07-15 20:52:46.689198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.279 [2024-07-15 20:52:46.689204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:12.279 [2024-07-15 20:52:46.689218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.279 qpair failed and we were unable to recover it.
00:27:12.279 [2024-07-15 20:52:46.699112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.279 [2024-07-15 20:52:46.699230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.279 [2024-07-15 20:52:46.699246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.279 [2024-07-15 20:52:46.699253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.279 [2024-07-15 20:52:46.699260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:12.279 [2024-07-15 20:52:46.699277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.279 qpair failed and we were unable to recover it.
00:27:12.279 [2024-07-15 20:52:46.709182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.279 [2024-07-15 20:52:46.709250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.279 [2024-07-15 20:52:46.709266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.279 [2024-07-15 20:52:46.709272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.279 [2024-07-15 20:52:46.709278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:12.279 [2024-07-15 20:52:46.709293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.279 qpair failed and we were unable to recover it.
00:27:12.279 [2024-07-15 20:52:46.719154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.279 [2024-07-15 20:52:46.719228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.279 [2024-07-15 20:52:46.719243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.279 [2024-07-15 20:52:46.719250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.279 [2024-07-15 20:52:46.719257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:12.279 [2024-07-15 20:52:46.719271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.279 qpair failed and we were unable to recover it.
00:27:12.279 [2024-07-15 20:52:46.729199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.279 [2024-07-15 20:52:46.729271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.279 [2024-07-15 20:52:46.729286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.279 [2024-07-15 20:52:46.729293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.279 [2024-07-15 20:52:46.729299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:12.279 [2024-07-15 20:52:46.729314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.279 qpair failed and we were unable to recover it.
00:27:12.279 [2024-07-15 20:52:46.739290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.279 [2024-07-15 20:52:46.739379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.279 [2024-07-15 20:52:46.739394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.279 [2024-07-15 20:52:46.739404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.279 [2024-07-15 20:52:46.739410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:12.279 [2024-07-15 20:52:46.739426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.279 qpair failed and we were unable to recover it.
00:27:12.279 [2024-07-15 20:52:46.749184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.279 [2024-07-15 20:52:46.749252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.279 [2024-07-15 20:52:46.749269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.279 [2024-07-15 20:52:46.749276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.279 [2024-07-15 20:52:46.749283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:12.279 [2024-07-15 20:52:46.749299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.279 qpair failed and we were unable to recover it.
00:27:12.540 [2024-07-15 20:52:46.759303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.540 [2024-07-15 20:52:46.759374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.540 [2024-07-15 20:52:46.759390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.540 [2024-07-15 20:52:46.759399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.540 [2024-07-15 20:52:46.759407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:12.540 [2024-07-15 20:52:46.759423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.540 qpair failed and we were unable to recover it.
00:27:12.540 [2024-07-15 20:52:46.769290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.540 [2024-07-15 20:52:46.769356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.540 [2024-07-15 20:52:46.769372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.540 [2024-07-15 20:52:46.769380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.540 [2024-07-15 20:52:46.769386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:12.540 [2024-07-15 20:52:46.769402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.540 qpair failed and we were unable to recover it.
00:27:12.540 [2024-07-15 20:52:46.779332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.540 [2024-07-15 20:52:46.779406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.540 [2024-07-15 20:52:46.779423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.540 [2024-07-15 20:52:46.779430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.540 [2024-07-15 20:52:46.779436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:12.540 [2024-07-15 20:52:46.779451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.540 qpair failed and we were unable to recover it.
00:27:12.541 [2024-07-15 20:52:46.789350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.541 [2024-07-15 20:52:46.789416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.541 [2024-07-15 20:52:46.789432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.541 [2024-07-15 20:52:46.789440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.541 [2024-07-15 20:52:46.789446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:12.541 [2024-07-15 20:52:46.789461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.541 qpair failed and we were unable to recover it.
00:27:12.541 [2024-07-15 20:52:46.799384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.541 [2024-07-15 20:52:46.799450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.541 [2024-07-15 20:52:46.799465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.541 [2024-07-15 20:52:46.799473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.541 [2024-07-15 20:52:46.799480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:12.541 [2024-07-15 20:52:46.799495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.541 qpair failed and we were unable to recover it.
00:27:12.541 [2024-07-15 20:52:46.809420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.541 [2024-07-15 20:52:46.809494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.541 [2024-07-15 20:52:46.809509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.541 [2024-07-15 20:52:46.809516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.541 [2024-07-15 20:52:46.809523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:12.541 [2024-07-15 20:52:46.809538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.541 qpair failed and we were unable to recover it.
00:27:12.541 [2024-07-15 20:52:46.819458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.541 [2024-07-15 20:52:46.819524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.541 [2024-07-15 20:52:46.819539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.541 [2024-07-15 20:52:46.819547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.541 [2024-07-15 20:52:46.819553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:12.541 [2024-07-15 20:52:46.819568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:12.541 qpair failed and we were unable to recover it.
00:27:12.541 [2024-07-15 20:52:46.829439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.541 [2024-07-15 20:52:46.829513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.541 [2024-07-15 20:52:46.829536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.541 [2024-07-15 20:52:46.829543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.541 [2024-07-15 20:52:46.829549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.541 [2024-07-15 20:52:46.829564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.541 qpair failed and we were unable to recover it. 00:27:12.541 [2024-07-15 20:52:46.839482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.541 [2024-07-15 20:52:46.839548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.541 [2024-07-15 20:52:46.839563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.541 [2024-07-15 20:52:46.839570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.541 [2024-07-15 20:52:46.839575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.541 [2024-07-15 20:52:46.839590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.541 qpair failed and we were unable to recover it. 00:27:12.541 [2024-07-15 20:52:46.849520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.541 [2024-07-15 20:52:46.849586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.541 [2024-07-15 20:52:46.849603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.541 [2024-07-15 20:52:46.849610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.541 [2024-07-15 20:52:46.849616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.541 [2024-07-15 20:52:46.849632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.541 qpair failed and we were unable to recover it. 
00:27:12.541 [2024-07-15 20:52:46.859565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.541 [2024-07-15 20:52:46.859649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.541 [2024-07-15 20:52:46.859664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.541 [2024-07-15 20:52:46.859672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.541 [2024-07-15 20:52:46.859677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.541 [2024-07-15 20:52:46.859693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.541 qpair failed and we were unable to recover it. 00:27:12.541 [2024-07-15 20:52:46.869600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.541 [2024-07-15 20:52:46.869669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.541 [2024-07-15 20:52:46.869684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.541 [2024-07-15 20:52:46.869691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.541 [2024-07-15 20:52:46.869697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.541 [2024-07-15 20:52:46.869715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.541 qpair failed and we were unable to recover it. 00:27:12.541 [2024-07-15 20:52:46.879598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.541 [2024-07-15 20:52:46.879666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.541 [2024-07-15 20:52:46.879681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.541 [2024-07-15 20:52:46.879688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.541 [2024-07-15 20:52:46.879694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.541 [2024-07-15 20:52:46.879709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.541 qpair failed and we were unable to recover it. 
00:27:12.541 [2024-07-15 20:52:46.889565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.541 [2024-07-15 20:52:46.889636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.541 [2024-07-15 20:52:46.889651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.541 [2024-07-15 20:52:46.889659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.541 [2024-07-15 20:52:46.889665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.541 [2024-07-15 20:52:46.889680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.541 qpair failed and we were unable to recover it. 00:27:12.541 [2024-07-15 20:52:46.899689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.541 [2024-07-15 20:52:46.899758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.541 [2024-07-15 20:52:46.899773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.542 [2024-07-15 20:52:46.899780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.542 [2024-07-15 20:52:46.899786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.542 [2024-07-15 20:52:46.899800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.542 qpair failed and we were unable to recover it. 00:27:12.542 [2024-07-15 20:52:46.909700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.542 [2024-07-15 20:52:46.909800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.542 [2024-07-15 20:52:46.909815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.542 [2024-07-15 20:52:46.909822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.542 [2024-07-15 20:52:46.909829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.542 [2024-07-15 20:52:46.909845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.542 qpair failed and we were unable to recover it. 
00:27:12.542 [2024-07-15 20:52:46.919719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.542 [2024-07-15 20:52:46.919788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.542 [2024-07-15 20:52:46.919807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.542 [2024-07-15 20:52:46.919813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.542 [2024-07-15 20:52:46.919819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.542 [2024-07-15 20:52:46.919835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.542 qpair failed and we were unable to recover it. 00:27:12.542 [2024-07-15 20:52:46.929742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.542 [2024-07-15 20:52:46.929805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.542 [2024-07-15 20:52:46.929820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.542 [2024-07-15 20:52:46.929827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.542 [2024-07-15 20:52:46.929834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.542 [2024-07-15 20:52:46.929849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.542 qpair failed and we were unable to recover it. 00:27:12.542 [2024-07-15 20:52:46.939715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.542 [2024-07-15 20:52:46.939786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.542 [2024-07-15 20:52:46.939801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.542 [2024-07-15 20:52:46.939809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.542 [2024-07-15 20:52:46.939815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.542 [2024-07-15 20:52:46.939829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.542 qpair failed and we were unable to recover it. 
00:27:12.542 [2024-07-15 20:52:46.949746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.542 [2024-07-15 20:52:46.949814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.542 [2024-07-15 20:52:46.949831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.542 [2024-07-15 20:52:46.949839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.542 [2024-07-15 20:52:46.949846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.542 [2024-07-15 20:52:46.949860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.542 qpair failed and we were unable to recover it. 00:27:12.542 [2024-07-15 20:52:46.959849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.542 [2024-07-15 20:52:46.959920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.542 [2024-07-15 20:52:46.959935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.542 [2024-07-15 20:52:46.959942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.542 [2024-07-15 20:52:46.959948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.542 [2024-07-15 20:52:46.959966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.542 qpair failed and we were unable to recover it. 00:27:12.542 [2024-07-15 20:52:46.969945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.542 [2024-07-15 20:52:46.970020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.542 [2024-07-15 20:52:46.970035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.542 [2024-07-15 20:52:46.970042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.542 [2024-07-15 20:52:46.970049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.542 [2024-07-15 20:52:46.970064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.542 qpair failed and we were unable to recover it. 
00:27:12.542 [2024-07-15 20:52:46.979892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.542 [2024-07-15 20:52:46.979953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.542 [2024-07-15 20:52:46.979967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.542 [2024-07-15 20:52:46.979974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.542 [2024-07-15 20:52:46.979980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.542 [2024-07-15 20:52:46.979995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.542 qpair failed and we were unable to recover it. 00:27:12.542 [2024-07-15 20:52:46.989908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.542 [2024-07-15 20:52:46.989972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.542 [2024-07-15 20:52:46.989988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.542 [2024-07-15 20:52:46.989996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.542 [2024-07-15 20:52:46.990002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.542 [2024-07-15 20:52:46.990017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.542 qpair failed and we were unable to recover it. 00:27:12.542 [2024-07-15 20:52:46.999944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.542 [2024-07-15 20:52:47.000013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.542 [2024-07-15 20:52:47.000028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.542 [2024-07-15 20:52:47.000035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.542 [2024-07-15 20:52:47.000042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.542 [2024-07-15 20:52:47.000057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.542 qpair failed and we were unable to recover it. 
00:27:12.542 [2024-07-15 20:52:47.009986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.542 [2024-07-15 20:52:47.010061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.542 [2024-07-15 20:52:47.010076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.542 [2024-07-15 20:52:47.010084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.542 [2024-07-15 20:52:47.010091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.542 [2024-07-15 20:52:47.010106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.542 qpair failed and we were unable to recover it. 00:27:12.542 [2024-07-15 20:52:47.019996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.542 [2024-07-15 20:52:47.020064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.542 [2024-07-15 20:52:47.020081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.542 [2024-07-15 20:52:47.020088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.542 [2024-07-15 20:52:47.020095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.542 [2024-07-15 20:52:47.020109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.542 qpair failed and we were unable to recover it. 00:27:12.806 [2024-07-15 20:52:47.030047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.806 [2024-07-15 20:52:47.030116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.806 [2024-07-15 20:52:47.030132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.806 [2024-07-15 20:52:47.030140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.806 [2024-07-15 20:52:47.030147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.806 [2024-07-15 20:52:47.030163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.806 qpair failed and we were unable to recover it. 
00:27:12.806 [2024-07-15 20:52:47.040104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.806 [2024-07-15 20:52:47.040172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.806 [2024-07-15 20:52:47.040188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.806 [2024-07-15 20:52:47.040195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.806 [2024-07-15 20:52:47.040201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.806 [2024-07-15 20:52:47.040216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.806 qpair failed and we were unable to recover it. 00:27:12.806 [2024-07-15 20:52:47.050102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.806 [2024-07-15 20:52:47.050179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.806 [2024-07-15 20:52:47.050194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.807 [2024-07-15 20:52:47.050201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.807 [2024-07-15 20:52:47.050211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.807 [2024-07-15 20:52:47.050230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.807 qpair failed and we were unable to recover it. 00:27:12.807 [2024-07-15 20:52:47.060128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.807 [2024-07-15 20:52:47.060202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.807 [2024-07-15 20:52:47.060217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.807 [2024-07-15 20:52:47.060228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.807 [2024-07-15 20:52:47.060234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.807 [2024-07-15 20:52:47.060249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.807 qpair failed and we were unable to recover it. 
00:27:12.807 [2024-07-15 20:52:47.070156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.807 [2024-07-15 20:52:47.070228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.807 [2024-07-15 20:52:47.070244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.807 [2024-07-15 20:52:47.070251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.807 [2024-07-15 20:52:47.070258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.807 [2024-07-15 20:52:47.070272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.807 qpair failed and we were unable to recover it. 00:27:12.807 [2024-07-15 20:52:47.080168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.807 [2024-07-15 20:52:47.080237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.807 [2024-07-15 20:52:47.080253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.807 [2024-07-15 20:52:47.080260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.807 [2024-07-15 20:52:47.080266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.807 [2024-07-15 20:52:47.080281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.807 qpair failed and we were unable to recover it. 00:27:12.807 [2024-07-15 20:52:47.090227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.807 [2024-07-15 20:52:47.090293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.807 [2024-07-15 20:52:47.090308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.807 [2024-07-15 20:52:47.090316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.807 [2024-07-15 20:52:47.090322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.807 [2024-07-15 20:52:47.090337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.807 qpair failed and we were unable to recover it. 
00:27:12.807 [2024-07-15 20:52:47.100234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.807 [2024-07-15 20:52:47.100297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.807 [2024-07-15 20:52:47.100312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.807 [2024-07-15 20:52:47.100319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.807 [2024-07-15 20:52:47.100326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.807 [2024-07-15 20:52:47.100341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.807 qpair failed and we were unable to recover it. 00:27:12.807 [2024-07-15 20:52:47.110271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.807 [2024-07-15 20:52:47.110338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.807 [2024-07-15 20:52:47.110353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.807 [2024-07-15 20:52:47.110360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.807 [2024-07-15 20:52:47.110366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.807 [2024-07-15 20:52:47.110381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.807 qpair failed and we were unable to recover it. 00:27:12.807 [2024-07-15 20:52:47.120285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.807 [2024-07-15 20:52:47.120355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.807 [2024-07-15 20:52:47.120370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.807 [2024-07-15 20:52:47.120377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.807 [2024-07-15 20:52:47.120383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.807 [2024-07-15 20:52:47.120398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.807 qpair failed and we were unable to recover it. 
00:27:12.807 [2024-07-15 20:52:47.130332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.807 [2024-07-15 20:52:47.130403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.807 [2024-07-15 20:52:47.130419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.807 [2024-07-15 20:52:47.130426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.807 [2024-07-15 20:52:47.130433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.807 [2024-07-15 20:52:47.130448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.807 qpair failed and we were unable to recover it. 00:27:12.807 [2024-07-15 20:52:47.140351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.807 [2024-07-15 20:52:47.140418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.807 [2024-07-15 20:52:47.140434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.807 [2024-07-15 20:52:47.140444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.807 [2024-07-15 20:52:47.140451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.807 [2024-07-15 20:52:47.140466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.807 qpair failed and we were unable to recover it. 00:27:12.807 [2024-07-15 20:52:47.150381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.807 [2024-07-15 20:52:47.150490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.807 [2024-07-15 20:52:47.150507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.807 [2024-07-15 20:52:47.150514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.807 [2024-07-15 20:52:47.150520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.807 [2024-07-15 20:52:47.150536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.807 qpair failed and we were unable to recover it. 
00:27:12.807 [2024-07-15 20:52:47.160384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.807 [2024-07-15 20:52:47.160449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.807 [2024-07-15 20:52:47.160464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.807 [2024-07-15 20:52:47.160471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.807 [2024-07-15 20:52:47.160478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.807 [2024-07-15 20:52:47.160492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.807 qpair failed and we were unable to recover it. 00:27:12.807 [2024-07-15 20:52:47.170434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.807 [2024-07-15 20:52:47.170505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.807 [2024-07-15 20:52:47.170520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.807 [2024-07-15 20:52:47.170527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.807 [2024-07-15 20:52:47.170533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.807 [2024-07-15 20:52:47.170548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.807 qpair failed and we were unable to recover it. 00:27:12.807 [2024-07-15 20:52:47.180450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.807 [2024-07-15 20:52:47.180519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.807 [2024-07-15 20:52:47.180535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.807 [2024-07-15 20:52:47.180543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.808 [2024-07-15 20:52:47.180550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.808 [2024-07-15 20:52:47.180564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.808 qpair failed and we were unable to recover it. 
00:27:12.808 [2024-07-15 20:52:47.190503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.808 [2024-07-15 20:52:47.190582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.808 [2024-07-15 20:52:47.190599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.808 [2024-07-15 20:52:47.190606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.808 [2024-07-15 20:52:47.190613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.808 [2024-07-15 20:52:47.190628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.808 qpair failed and we were unable to recover it. 00:27:12.808 [2024-07-15 20:52:47.200520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.808 [2024-07-15 20:52:47.200589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.808 [2024-07-15 20:52:47.200605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.808 [2024-07-15 20:52:47.200612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.808 [2024-07-15 20:52:47.200618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.808 [2024-07-15 20:52:47.200632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.808 qpair failed and we were unable to recover it. 00:27:12.808 [2024-07-15 20:52:47.210576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.808 [2024-07-15 20:52:47.210645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.808 [2024-07-15 20:52:47.210661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.808 [2024-07-15 20:52:47.210668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.808 [2024-07-15 20:52:47.210674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.808 [2024-07-15 20:52:47.210689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.808 qpair failed and we were unable to recover it. 
00:27:12.808 [2024-07-15 20:52:47.220631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.808 [2024-07-15 20:52:47.220697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.808 [2024-07-15 20:52:47.220713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.808 [2024-07-15 20:52:47.220719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.808 [2024-07-15 20:52:47.220726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.808 [2024-07-15 20:52:47.220741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.808 qpair failed and we were unable to recover it. 00:27:12.808 [2024-07-15 20:52:47.230602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.808 [2024-07-15 20:52:47.230666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.808 [2024-07-15 20:52:47.230682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.808 [2024-07-15 20:52:47.230692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.808 [2024-07-15 20:52:47.230698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.808 [2024-07-15 20:52:47.230713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.808 qpair failed and we were unable to recover it. 00:27:12.808 [2024-07-15 20:52:47.240627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.808 [2024-07-15 20:52:47.240695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.808 [2024-07-15 20:52:47.240710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.808 [2024-07-15 20:52:47.240718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.808 [2024-07-15 20:52:47.240724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.808 [2024-07-15 20:52:47.240739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.808 qpair failed and we were unable to recover it. 
00:27:12.808 [2024-07-15 20:52:47.250651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.808 [2024-07-15 20:52:47.250720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.808 [2024-07-15 20:52:47.250736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.808 [2024-07-15 20:52:47.250743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.808 [2024-07-15 20:52:47.250749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.808 [2024-07-15 20:52:47.250763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.808 qpair failed and we were unable to recover it. 00:27:12.808 [2024-07-15 20:52:47.260692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.808 [2024-07-15 20:52:47.260803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.808 [2024-07-15 20:52:47.260819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.808 [2024-07-15 20:52:47.260827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.808 [2024-07-15 20:52:47.260833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.808 [2024-07-15 20:52:47.260848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.808 qpair failed and we were unable to recover it. 00:27:12.808 [2024-07-15 20:52:47.270722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.808 [2024-07-15 20:52:47.270792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.808 [2024-07-15 20:52:47.270807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.808 [2024-07-15 20:52:47.270814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.808 [2024-07-15 20:52:47.270821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.808 [2024-07-15 20:52:47.270836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.808 qpair failed and we were unable to recover it. 
00:27:12.808 [2024-07-15 20:52:47.280739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.808 [2024-07-15 20:52:47.280817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.808 [2024-07-15 20:52:47.280833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.808 [2024-07-15 20:52:47.280841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.808 [2024-07-15 20:52:47.280847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:12.808 [2024-07-15 20:52:47.280863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.808 qpair failed and we were unable to recover it. 00:27:13.069 [2024-07-15 20:52:47.290771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.069 [2024-07-15 20:52:47.290849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.070 [2024-07-15 20:52:47.290866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.070 [2024-07-15 20:52:47.290875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.070 [2024-07-15 20:52:47.290881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.070 [2024-07-15 20:52:47.290897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.070 qpair failed and we were unable to recover it. 00:27:13.070 [2024-07-15 20:52:47.300813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.070 [2024-07-15 20:52:47.300887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.070 [2024-07-15 20:52:47.300904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.070 [2024-07-15 20:52:47.300911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.070 [2024-07-15 20:52:47.300917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.070 [2024-07-15 20:52:47.300932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.070 qpair failed and we were unable to recover it. 
00:27:13.070 [2024-07-15 20:52:47.310797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.070 [2024-07-15 20:52:47.310866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.070 [2024-07-15 20:52:47.310882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.070 [2024-07-15 20:52:47.310890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.070 [2024-07-15 20:52:47.310896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.070 [2024-07-15 20:52:47.310911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.070 qpair failed and we were unable to recover it. 00:27:13.070 [2024-07-15 20:52:47.320865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.070 [2024-07-15 20:52:47.320932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.070 [2024-07-15 20:52:47.320952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.070 [2024-07-15 20:52:47.320960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.070 [2024-07-15 20:52:47.320967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.070 [2024-07-15 20:52:47.320982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.070 qpair failed and we were unable to recover it. 00:27:13.070 [2024-07-15 20:52:47.330902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.070 [2024-07-15 20:52:47.330973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.070 [2024-07-15 20:52:47.330989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.070 [2024-07-15 20:52:47.330996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.070 [2024-07-15 20:52:47.331002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.070 [2024-07-15 20:52:47.331018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.070 qpair failed and we were unable to recover it. 
00:27:13.070 [2024-07-15 20:52:47.340923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.070 [2024-07-15 20:52:47.340990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.070 [2024-07-15 20:52:47.341005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.070 [2024-07-15 20:52:47.341012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.070 [2024-07-15 20:52:47.341018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.070 [2024-07-15 20:52:47.341033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.070 qpair failed and we were unable to recover it. 00:27:13.070 [2024-07-15 20:52:47.350969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.070 [2024-07-15 20:52:47.351037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.070 [2024-07-15 20:52:47.351052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.070 [2024-07-15 20:52:47.351060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.070 [2024-07-15 20:52:47.351066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.070 [2024-07-15 20:52:47.351081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.070 qpair failed and we were unable to recover it. 00:27:13.070 [2024-07-15 20:52:47.360989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.070 [2024-07-15 20:52:47.361062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.070 [2024-07-15 20:52:47.361077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.070 [2024-07-15 20:52:47.361084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.070 [2024-07-15 20:52:47.361091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.070 [2024-07-15 20:52:47.361109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.070 qpair failed and we were unable to recover it. 
00:27:13.070 [2024-07-15 20:52:47.371014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.070 [2024-07-15 20:52:47.371124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.070 [2024-07-15 20:52:47.371141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.070 [2024-07-15 20:52:47.371148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.070 [2024-07-15 20:52:47.371154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.070 [2024-07-15 20:52:47.371172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.070 qpair failed and we were unable to recover it. 00:27:13.070 [2024-07-15 20:52:47.381037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.070 [2024-07-15 20:52:47.381110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.070 [2024-07-15 20:52:47.381125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.070 [2024-07-15 20:52:47.381133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.070 [2024-07-15 20:52:47.381139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.070 [2024-07-15 20:52:47.381154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.070 qpair failed and we were unable to recover it. 00:27:13.070 [2024-07-15 20:52:47.391079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.070 [2024-07-15 20:52:47.391150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.070 [2024-07-15 20:52:47.391165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.070 [2024-07-15 20:52:47.391173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.070 [2024-07-15 20:52:47.391179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.070 [2024-07-15 20:52:47.391194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.070 qpair failed and we were unable to recover it. 
00:27:13.070 [2024-07-15 20:52:47.401089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.070 [2024-07-15 20:52:47.401156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.070 [2024-07-15 20:52:47.401171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.070 [2024-07-15 20:52:47.401178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.070 [2024-07-15 20:52:47.401185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.070 [2024-07-15 20:52:47.401199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.070 qpair failed and we were unable to recover it.
00:27:13.070 [2024-07-15 20:52:47.411180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.070 [2024-07-15 20:52:47.411252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.070 [2024-07-15 20:52:47.411270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.070 [2024-07-15 20:52:47.411278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.070 [2024-07-15 20:52:47.411284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.070 [2024-07-15 20:52:47.411299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.070 qpair failed and we were unable to recover it.
00:27:13.070 [2024-07-15 20:52:47.421161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.070 [2024-07-15 20:52:47.421233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.070 [2024-07-15 20:52:47.421249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.070 [2024-07-15 20:52:47.421256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.070 [2024-07-15 20:52:47.421262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.070 [2024-07-15 20:52:47.421277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.070 qpair failed and we were unable to recover it.
00:27:13.070 [2024-07-15 20:52:47.431147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.070 [2024-07-15 20:52:47.431215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.070 [2024-07-15 20:52:47.431233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.071 [2024-07-15 20:52:47.431241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.071 [2024-07-15 20:52:47.431247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.071 [2024-07-15 20:52:47.431262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.071 qpair failed and we were unable to recover it.
00:27:13.071 [2024-07-15 20:52:47.441210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.071 [2024-07-15 20:52:47.441285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.071 [2024-07-15 20:52:47.441300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.071 [2024-07-15 20:52:47.441308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.071 [2024-07-15 20:52:47.441314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.071 [2024-07-15 20:52:47.441329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.071 qpair failed and we were unable to recover it.
00:27:13.071 [2024-07-15 20:52:47.451241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.071 [2024-07-15 20:52:47.451321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.071 [2024-07-15 20:52:47.451336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.071 [2024-07-15 20:52:47.451343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.071 [2024-07-15 20:52:47.451352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.071 [2024-07-15 20:52:47.451367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.071 qpair failed and we were unable to recover it.
00:27:13.071 [2024-07-15 20:52:47.461272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.071 [2024-07-15 20:52:47.461343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.071 [2024-07-15 20:52:47.461360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.071 [2024-07-15 20:52:47.461367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.071 [2024-07-15 20:52:47.461374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.071 [2024-07-15 20:52:47.461392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.071 qpair failed and we were unable to recover it.
00:27:13.071 [2024-07-15 20:52:47.471284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.071 [2024-07-15 20:52:47.471356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.071 [2024-07-15 20:52:47.471375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.071 [2024-07-15 20:52:47.471383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.071 [2024-07-15 20:52:47.471389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.071 [2024-07-15 20:52:47.471405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.071 qpair failed and we were unable to recover it.
00:27:13.071 [2024-07-15 20:52:47.481313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.071 [2024-07-15 20:52:47.481389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.071 [2024-07-15 20:52:47.481405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.071 [2024-07-15 20:52:47.481413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.071 [2024-07-15 20:52:47.481419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.071 [2024-07-15 20:52:47.481434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.071 qpair failed and we were unable to recover it.
00:27:13.071 [2024-07-15 20:52:47.491362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.071 [2024-07-15 20:52:47.491432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.071 [2024-07-15 20:52:47.491448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.071 [2024-07-15 20:52:47.491456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.071 [2024-07-15 20:52:47.491464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.071 [2024-07-15 20:52:47.491479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.071 qpair failed and we were unable to recover it.
00:27:13.071 [2024-07-15 20:52:47.501398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.071 [2024-07-15 20:52:47.501472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.071 [2024-07-15 20:52:47.501488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.071 [2024-07-15 20:52:47.501496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.071 [2024-07-15 20:52:47.501503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.071 [2024-07-15 20:52:47.501518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.071 qpair failed and we were unable to recover it.
00:27:13.071 [2024-07-15 20:52:47.511415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.071 [2024-07-15 20:52:47.511528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.071 [2024-07-15 20:52:47.511545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.071 [2024-07-15 20:52:47.511553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.071 [2024-07-15 20:52:47.511560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.071 [2024-07-15 20:52:47.511576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.071 qpair failed and we were unable to recover it.
00:27:13.071 [2024-07-15 20:52:47.521374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.071 [2024-07-15 20:52:47.521442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.071 [2024-07-15 20:52:47.521458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.071 [2024-07-15 20:52:47.521466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.071 [2024-07-15 20:52:47.521473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.071 [2024-07-15 20:52:47.521488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.071 qpair failed and we were unable to recover it.
00:27:13.071 [2024-07-15 20:52:47.531461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.071 [2024-07-15 20:52:47.531527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.071 [2024-07-15 20:52:47.531543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.071 [2024-07-15 20:52:47.531551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.071 [2024-07-15 20:52:47.531557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.071 [2024-07-15 20:52:47.531572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.071 qpair failed and we were unable to recover it.
00:27:13.071 [2024-07-15 20:52:47.541502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.071 [2024-07-15 20:52:47.541569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.071 [2024-07-15 20:52:47.541586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.071 [2024-07-15 20:52:47.541597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.071 [2024-07-15 20:52:47.541603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.071 [2024-07-15 20:52:47.541618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.071 qpair failed and we were unable to recover it.
00:27:13.332 [2024-07-15 20:52:47.551549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.332 [2024-07-15 20:52:47.551615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.332 [2024-07-15 20:52:47.551631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.332 [2024-07-15 20:52:47.551639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.332 [2024-07-15 20:52:47.551645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.332 [2024-07-15 20:52:47.551661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.332 qpair failed and we were unable to recover it.
00:27:13.332 [2024-07-15 20:52:47.561552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.332 [2024-07-15 20:52:47.561625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.332 [2024-07-15 20:52:47.561641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.332 [2024-07-15 20:52:47.561648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.332 [2024-07-15 20:52:47.561655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.332 [2024-07-15 20:52:47.561669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.332 qpair failed and we were unable to recover it.
00:27:13.332 [2024-07-15 20:52:47.571564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.332 [2024-07-15 20:52:47.571633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.332 [2024-07-15 20:52:47.571649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.332 [2024-07-15 20:52:47.571656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.332 [2024-07-15 20:52:47.571662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.332 [2024-07-15 20:52:47.571677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.332 qpair failed and we were unable to recover it.
00:27:13.332 [2024-07-15 20:52:47.581612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.333 [2024-07-15 20:52:47.581683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.333 [2024-07-15 20:52:47.581700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.333 [2024-07-15 20:52:47.581709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.333 [2024-07-15 20:52:47.581715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.333 [2024-07-15 20:52:47.581731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-07-15 20:52:47.591633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.333 [2024-07-15 20:52:47.591699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.333 [2024-07-15 20:52:47.591714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.333 [2024-07-15 20:52:47.591721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.333 [2024-07-15 20:52:47.591728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.333 [2024-07-15 20:52:47.591743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-07-15 20:52:47.601662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.333 [2024-07-15 20:52:47.601731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.333 [2024-07-15 20:52:47.601747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.333 [2024-07-15 20:52:47.601755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.333 [2024-07-15 20:52:47.601760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.333 [2024-07-15 20:52:47.601776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-07-15 20:52:47.611694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.333 [2024-07-15 20:52:47.611757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.333 [2024-07-15 20:52:47.611772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.333 [2024-07-15 20:52:47.611779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.333 [2024-07-15 20:52:47.611785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.333 [2024-07-15 20:52:47.611800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-07-15 20:52:47.621739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.333 [2024-07-15 20:52:47.621811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.333 [2024-07-15 20:52:47.621827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.333 [2024-07-15 20:52:47.621834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.333 [2024-07-15 20:52:47.621840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.333 [2024-07-15 20:52:47.621855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-07-15 20:52:47.631691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.333 [2024-07-15 20:52:47.631759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.333 [2024-07-15 20:52:47.631774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.333 [2024-07-15 20:52:47.631784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.333 [2024-07-15 20:52:47.631790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.333 [2024-07-15 20:52:47.631805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-07-15 20:52:47.641775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.333 [2024-07-15 20:52:47.641847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.333 [2024-07-15 20:52:47.641861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.333 [2024-07-15 20:52:47.641869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.333 [2024-07-15 20:52:47.641875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.333 [2024-07-15 20:52:47.641890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-07-15 20:52:47.651742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.333 [2024-07-15 20:52:47.651811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.333 [2024-07-15 20:52:47.651827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.333 [2024-07-15 20:52:47.651834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.333 [2024-07-15 20:52:47.651840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.333 [2024-07-15 20:52:47.651855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-07-15 20:52:47.661831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.333 [2024-07-15 20:52:47.661904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.333 [2024-07-15 20:52:47.661919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.333 [2024-07-15 20:52:47.661926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.333 [2024-07-15 20:52:47.661932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.333 [2024-07-15 20:52:47.661947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-07-15 20:52:47.671854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.333 [2024-07-15 20:52:47.671920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.333 [2024-07-15 20:52:47.671935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.333 [2024-07-15 20:52:47.671942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.333 [2024-07-15 20:52:47.671949] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.333 [2024-07-15 20:52:47.671964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-07-15 20:52:47.681918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.333 [2024-07-15 20:52:47.681989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.333 [2024-07-15 20:52:47.682004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.333 [2024-07-15 20:52:47.682011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.333 [2024-07-15 20:52:47.682018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.333 [2024-07-15 20:52:47.682032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-07-15 20:52:47.691963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.333 [2024-07-15 20:52:47.692032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.333 [2024-07-15 20:52:47.692047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.333 [2024-07-15 20:52:47.692055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.333 [2024-07-15 20:52:47.692061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.333 [2024-07-15 20:52:47.692076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.333 qpair failed and we were unable to recover it.
00:27:13.333 [2024-07-15 20:52:47.701960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.334 [2024-07-15 20:52:47.702030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.334 [2024-07-15 20:52:47.702045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.334 [2024-07-15 20:52:47.702052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.334 [2024-07-15 20:52:47.702059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.334 [2024-07-15 20:52:47.702074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-07-15 20:52:47.711979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.334 [2024-07-15 20:52:47.712047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.334 [2024-07-15 20:52:47.712062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.334 [2024-07-15 20:52:47.712069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.334 [2024-07-15 20:52:47.712075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.334 [2024-07-15 20:52:47.712090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-07-15 20:52:47.722025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.334 [2024-07-15 20:52:47.722091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.334 [2024-07-15 20:52:47.722109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.334 [2024-07-15 20:52:47.722117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.334 [2024-07-15 20:52:47.722122] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.334 [2024-07-15 20:52:47.722137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-07-15 20:52:47.732022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.334 [2024-07-15 20:52:47.732091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.334 [2024-07-15 20:52:47.732107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.334 [2024-07-15 20:52:47.732114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.334 [2024-07-15 20:52:47.732120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.334 [2024-07-15 20:52:47.732135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-07-15 20:52:47.742056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.334 [2024-07-15 20:52:47.742124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.334 [2024-07-15 20:52:47.742140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.334 [2024-07-15 20:52:47.742147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.334 [2024-07-15 20:52:47.742154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.334 [2024-07-15 20:52:47.742168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-07-15 20:52:47.752098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.334 [2024-07-15 20:52:47.752211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.334 [2024-07-15 20:52:47.752231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.334 [2024-07-15 20:52:47.752239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.334 [2024-07-15 20:52:47.752245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.334 [2024-07-15 20:52:47.752261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-07-15 20:52:47.762117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.334 [2024-07-15 20:52:47.762187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.334 [2024-07-15 20:52:47.762203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.334 [2024-07-15 20:52:47.762210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.334 [2024-07-15 20:52:47.762216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.334 [2024-07-15 20:52:47.762240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-07-15 20:52:47.772156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.334 [2024-07-15 20:52:47.772228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.334 [2024-07-15 20:52:47.772244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.334 [2024-07-15 20:52:47.772251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.334 [2024-07-15 20:52:47.772257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.334 [2024-07-15 20:52:47.772273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-07-15 20:52:47.782209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.334 [2024-07-15 20:52:47.782283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.334 [2024-07-15 20:52:47.782299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.334 [2024-07-15 20:52:47.782307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.334 [2024-07-15 20:52:47.782313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.334 [2024-07-15 20:52:47.782327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-07-15 20:52:47.792180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.334 [2024-07-15 20:52:47.792248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.334 [2024-07-15 20:52:47.792264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.334 [2024-07-15 20:52:47.792271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.334 [2024-07-15 20:52:47.792277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.334 [2024-07-15 20:52:47.792292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-07-15 20:52:47.802223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.334 [2024-07-15 20:52:47.802295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.334 [2024-07-15 20:52:47.802310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.334 [2024-07-15 20:52:47.802317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.334 [2024-07-15 20:52:47.802323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.334 [2024-07-15 20:52:47.802339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.334 [2024-07-15 20:52:47.812269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.334 [2024-07-15 20:52:47.812339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.334 [2024-07-15 20:52:47.812358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.334 [2024-07-15 20:52:47.812365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.334 [2024-07-15 20:52:47.812371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.334 [2024-07-15 20:52:47.812387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.334 qpair failed and we were unable to recover it.
00:27:13.595 [2024-07-15 20:52:47.822287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.595 [2024-07-15 20:52:47.822404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.595 [2024-07-15 20:52:47.822421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.595 [2024-07-15 20:52:47.822429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.595 [2024-07-15 20:52:47.822436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.595 [2024-07-15 20:52:47.822453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.595 qpair failed and we were unable to recover it.
00:27:13.595 [2024-07-15 20:52:47.832311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.595 [2024-07-15 20:52:47.832377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.595 [2024-07-15 20:52:47.832393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.595 [2024-07-15 20:52:47.832400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.595 [2024-07-15 20:52:47.832406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.595 [2024-07-15 20:52:47.832421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.595 qpair failed and we were unable to recover it.
00:27:13.595 [2024-07-15 20:52:47.842344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.595 [2024-07-15 20:52:47.842467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.595 [2024-07-15 20:52:47.842484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.595 [2024-07-15 20:52:47.842491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.595 [2024-07-15 20:52:47.842500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.595 [2024-07-15 20:52:47.842515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.595 qpair failed and we were unable to recover it.
00:27:13.595 [2024-07-15 20:52:47.852383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.595 [2024-07-15 20:52:47.852452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.595 [2024-07-15 20:52:47.852468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.595 [2024-07-15 20:52:47.852476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.595 [2024-07-15 20:52:47.852487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.596 [2024-07-15 20:52:47.852503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.596 qpair failed and we were unable to recover it.
00:27:13.596 [2024-07-15 20:52:47.862338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.596 [2024-07-15 20:52:47.862410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.596 [2024-07-15 20:52:47.862424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.596 [2024-07-15 20:52:47.862431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.596 [2024-07-15 20:52:47.862438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.596 [2024-07-15 20:52:47.862454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.596 qpair failed and we were unable to recover it.
00:27:13.596 [2024-07-15 20:52:47.872368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.596 [2024-07-15 20:52:47.872436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.596 [2024-07-15 20:52:47.872451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.596 [2024-07-15 20:52:47.872458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.596 [2024-07-15 20:52:47.872465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.596 [2024-07-15 20:52:47.872480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.596 qpair failed and we were unable to recover it.
00:27:13.596 [2024-07-15 20:52:47.882468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.596 [2024-07-15 20:52:47.882547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.596 [2024-07-15 20:52:47.882562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.596 [2024-07-15 20:52:47.882569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.596 [2024-07-15 20:52:47.882576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.596 [2024-07-15 20:52:47.882590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.596 qpair failed and we were unable to recover it.
00:27:13.596 [2024-07-15 20:52:47.892416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.596 [2024-07-15 20:52:47.892486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.596 [2024-07-15 20:52:47.892501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.596 [2024-07-15 20:52:47.892509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.596 [2024-07-15 20:52:47.892514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.596 [2024-07-15 20:52:47.892529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.596 qpair failed and we were unable to recover it.
00:27:13.596 [2024-07-15 20:52:47.902453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.596 [2024-07-15 20:52:47.902527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.596 [2024-07-15 20:52:47.902542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.596 [2024-07-15 20:52:47.902549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.596 [2024-07-15 20:52:47.902555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.596 [2024-07-15 20:52:47.902570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.596 qpair failed and we were unable to recover it.
00:27:13.596 [2024-07-15 20:52:47.912468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.596 [2024-07-15 20:52:47.912536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.596 [2024-07-15 20:52:47.912552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.596 [2024-07-15 20:52:47.912558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.596 [2024-07-15 20:52:47.912565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.596 [2024-07-15 20:52:47.912580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.596 qpair failed and we were unable to recover it.
00:27:13.596 [2024-07-15 20:52:47.922570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.596 [2024-07-15 20:52:47.922638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.596 [2024-07-15 20:52:47.922653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.596 [2024-07-15 20:52:47.922662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.596 [2024-07-15 20:52:47.922668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.596 [2024-07-15 20:52:47.922683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.596 qpair failed and we were unable to recover it.
00:27:13.596 [2024-07-15 20:52:47.932686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.596 [2024-07-15 20:52:47.932761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.596 [2024-07-15 20:52:47.932776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.596 [2024-07-15 20:52:47.932784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.596 [2024-07-15 20:52:47.932790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.596 [2024-07-15 20:52:47.932804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.596 qpair failed and we were unable to recover it.
00:27:13.596 [2024-07-15 20:52:47.942567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.596 [2024-07-15 20:52:47.942631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.596 [2024-07-15 20:52:47.942646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.596 [2024-07-15 20:52:47.942653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.596 [2024-07-15 20:52:47.942663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.596 [2024-07-15 20:52:47.942678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.596 qpair failed and we were unable to recover it.
00:27:13.596 [2024-07-15 20:52:47.952689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.596 [2024-07-15 20:52:47.952761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.596 [2024-07-15 20:52:47.952777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.596 [2024-07-15 20:52:47.952784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.596 [2024-07-15 20:52:47.952791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.596 [2024-07-15 20:52:47.952806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.596 qpair failed and we were unable to recover it.
00:27:13.596 [2024-07-15 20:52:47.962658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.596 [2024-07-15 20:52:47.962727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.596 [2024-07-15 20:52:47.962742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.596 [2024-07-15 20:52:47.962749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.596 [2024-07-15 20:52:47.962756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90
00:27:13.596 [2024-07-15 20:52:47.962770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:13.596 qpair failed and we were unable to recover it.
00:27:13.596 [2024-07-15 20:52:47.972648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.596 [2024-07-15 20:52:47.972719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.596 [2024-07-15 20:52:47.972735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.596 [2024-07-15 20:52:47.972742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.596 [2024-07-15 20:52:47.972749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.596 [2024-07-15 20:52:47.972764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.596 qpair failed and we were unable to recover it. 00:27:13.596 [2024-07-15 20:52:47.982661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.596 [2024-07-15 20:52:47.982729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.596 [2024-07-15 20:52:47.982745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.596 [2024-07-15 20:52:47.982752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.597 [2024-07-15 20:52:47.982758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.597 [2024-07-15 20:52:47.982773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.597 qpair failed and we were unable to recover it. 00:27:13.597 [2024-07-15 20:52:47.992771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.597 [2024-07-15 20:52:47.992839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.597 [2024-07-15 20:52:47.992854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.597 [2024-07-15 20:52:47.992861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.597 [2024-07-15 20:52:47.992867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.597 [2024-07-15 20:52:47.992883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.597 qpair failed and we were unable to recover it. 
00:27:13.597 [2024-07-15 20:52:48.002829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.597 [2024-07-15 20:52:48.002901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.597 [2024-07-15 20:52:48.002916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.597 [2024-07-15 20:52:48.002923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.597 [2024-07-15 20:52:48.002930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.597 [2024-07-15 20:52:48.002944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.597 qpair failed and we were unable to recover it. 00:27:13.597 [2024-07-15 20:52:48.012738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.597 [2024-07-15 20:52:48.012808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.597 [2024-07-15 20:52:48.012824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.597 [2024-07-15 20:52:48.012831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.597 [2024-07-15 20:52:48.012837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.597 [2024-07-15 20:52:48.012852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.597 qpair failed and we were unable to recover it. 00:27:13.597 [2024-07-15 20:52:48.022781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.597 [2024-07-15 20:52:48.022847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.597 [2024-07-15 20:52:48.022862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.597 [2024-07-15 20:52:48.022869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.597 [2024-07-15 20:52:48.022876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.597 [2024-07-15 20:52:48.022892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.597 qpair failed and we were unable to recover it. 
00:27:13.597 [2024-07-15 20:52:48.032861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.597 [2024-07-15 20:52:48.032927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.597 [2024-07-15 20:52:48.032942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.597 [2024-07-15 20:52:48.032952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.597 [2024-07-15 20:52:48.032958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.597 [2024-07-15 20:52:48.032973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.597 qpair failed and we were unable to recover it. 00:27:13.597 [2024-07-15 20:52:48.042879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.597 [2024-07-15 20:52:48.042944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.597 [2024-07-15 20:52:48.042958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.597 [2024-07-15 20:52:48.042965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.597 [2024-07-15 20:52:48.042972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.597 [2024-07-15 20:52:48.042986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.597 qpair failed and we were unable to recover it. 00:27:13.597 [2024-07-15 20:52:48.052940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.597 [2024-07-15 20:52:48.053035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.597 [2024-07-15 20:52:48.053052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.597 [2024-07-15 20:52:48.053059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.597 [2024-07-15 20:52:48.053066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.597 [2024-07-15 20:52:48.053081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.597 qpair failed and we were unable to recover it. 
00:27:13.597 [2024-07-15 20:52:48.062951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.597 [2024-07-15 20:52:48.063015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.597 [2024-07-15 20:52:48.063030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.597 [2024-07-15 20:52:48.063037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.597 [2024-07-15 20:52:48.063044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.597 [2024-07-15 20:52:48.063058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.597 qpair failed and we were unable to recover it. 00:27:13.597 [2024-07-15 20:52:48.072992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.597 [2024-07-15 20:52:48.073072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.597 [2024-07-15 20:52:48.073090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.597 [2024-07-15 20:52:48.073097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.597 [2024-07-15 20:52:48.073104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.597 [2024-07-15 20:52:48.073119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.597 qpair failed and we were unable to recover it. 00:27:13.858 [2024-07-15 20:52:48.083013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.858 [2024-07-15 20:52:48.083083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.858 [2024-07-15 20:52:48.083100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.858 [2024-07-15 20:52:48.083111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.858 [2024-07-15 20:52:48.083120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.858 [2024-07-15 20:52:48.083136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.858 qpair failed and we were unable to recover it. 
00:27:13.858 [2024-07-15 20:52:48.093044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.858 [2024-07-15 20:52:48.093124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.858 [2024-07-15 20:52:48.093139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.858 [2024-07-15 20:52:48.093146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.858 [2024-07-15 20:52:48.093153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.858 [2024-07-15 20:52:48.093167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.858 qpair failed and we were unable to recover it. 00:27:13.858 [2024-07-15 20:52:48.103075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.858 [2024-07-15 20:52:48.103142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.858 [2024-07-15 20:52:48.103157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.858 [2024-07-15 20:52:48.103164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.858 [2024-07-15 20:52:48.103170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.858 [2024-07-15 20:52:48.103186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.858 qpair failed and we were unable to recover it. 00:27:13.858 [2024-07-15 20:52:48.113111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.858 [2024-07-15 20:52:48.113176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.858 [2024-07-15 20:52:48.113192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.858 [2024-07-15 20:52:48.113199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.858 [2024-07-15 20:52:48.113205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.858 [2024-07-15 20:52:48.113220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.858 qpair failed and we were unable to recover it. 
00:27:13.858 [2024-07-15 20:52:48.123125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.858 [2024-07-15 20:52:48.123194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.858 [2024-07-15 20:52:48.123213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.858 [2024-07-15 20:52:48.123220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.858 [2024-07-15 20:52:48.123230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.858 [2024-07-15 20:52:48.123246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.858 qpair failed and we were unable to recover it. 00:27:13.858 [2024-07-15 20:52:48.133149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.858 [2024-07-15 20:52:48.133219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.858 [2024-07-15 20:52:48.133239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.858 [2024-07-15 20:52:48.133247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.858 [2024-07-15 20:52:48.133254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.858 [2024-07-15 20:52:48.133269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.858 qpair failed and we were unable to recover it. 00:27:13.858 [2024-07-15 20:52:48.143156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.858 [2024-07-15 20:52:48.143232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.858 [2024-07-15 20:52:48.143248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.858 [2024-07-15 20:52:48.143255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.859 [2024-07-15 20:52:48.143261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.859 [2024-07-15 20:52:48.143276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.859 qpair failed and we were unable to recover it. 
00:27:13.859 [2024-07-15 20:52:48.153217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.859 [2024-07-15 20:52:48.153325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.859 [2024-07-15 20:52:48.153342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.859 [2024-07-15 20:52:48.153349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.859 [2024-07-15 20:52:48.153356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.859 [2024-07-15 20:52:48.153371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.859 qpair failed and we were unable to recover it. 00:27:13.859 [2024-07-15 20:52:48.163231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.859 [2024-07-15 20:52:48.163296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.859 [2024-07-15 20:52:48.163311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.859 [2024-07-15 20:52:48.163319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.859 [2024-07-15 20:52:48.163325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.859 [2024-07-15 20:52:48.163343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.859 qpair failed and we were unable to recover it. 00:27:13.859 [2024-07-15 20:52:48.173269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.859 [2024-07-15 20:52:48.173340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.859 [2024-07-15 20:52:48.173356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.859 [2024-07-15 20:52:48.173364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.859 [2024-07-15 20:52:48.173370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.859 [2024-07-15 20:52:48.173385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.859 qpair failed and we were unable to recover it. 
00:27:13.859 [2024-07-15 20:52:48.183297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.859 [2024-07-15 20:52:48.183372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.859 [2024-07-15 20:52:48.183387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.859 [2024-07-15 20:52:48.183394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.859 [2024-07-15 20:52:48.183401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.859 [2024-07-15 20:52:48.183415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.859 qpair failed and we were unable to recover it. 00:27:13.859 [2024-07-15 20:52:48.193301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.859 [2024-07-15 20:52:48.193368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.859 [2024-07-15 20:52:48.193383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.859 [2024-07-15 20:52:48.193390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.859 [2024-07-15 20:52:48.193397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.859 [2024-07-15 20:52:48.193412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.859 qpair failed and we were unable to recover it. 00:27:13.859 [2024-07-15 20:52:48.203389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.859 [2024-07-15 20:52:48.203452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.859 [2024-07-15 20:52:48.203467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.859 [2024-07-15 20:52:48.203474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.859 [2024-07-15 20:52:48.203480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.859 [2024-07-15 20:52:48.203495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.859 qpair failed and we were unable to recover it. 
00:27:13.859 [2024-07-15 20:52:48.213398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.859 [2024-07-15 20:52:48.213470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.859 [2024-07-15 20:52:48.213488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.859 [2024-07-15 20:52:48.213495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.859 [2024-07-15 20:52:48.213502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.859 [2024-07-15 20:52:48.213516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.859 qpair failed and we were unable to recover it. 00:27:13.859 [2024-07-15 20:52:48.223419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.859 [2024-07-15 20:52:48.223487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.859 [2024-07-15 20:52:48.223502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.859 [2024-07-15 20:52:48.223508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.859 [2024-07-15 20:52:48.223515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.859 [2024-07-15 20:52:48.223529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.859 qpair failed and we were unable to recover it. 00:27:13.859 [2024-07-15 20:52:48.233390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.859 [2024-07-15 20:52:48.233456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.859 [2024-07-15 20:52:48.233472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.859 [2024-07-15 20:52:48.233479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.859 [2024-07-15 20:52:48.233485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.859 [2024-07-15 20:52:48.233500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.859 qpair failed and we were unable to recover it. 
00:27:13.859 [2024-07-15 20:52:48.243461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.859 [2024-07-15 20:52:48.243529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.859 [2024-07-15 20:52:48.243544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.859 [2024-07-15 20:52:48.243551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.859 [2024-07-15 20:52:48.243557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.859 [2024-07-15 20:52:48.243571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.859 qpair failed and we were unable to recover it. 00:27:13.859 [2024-07-15 20:52:48.253487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.859 [2024-07-15 20:52:48.253555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.859 [2024-07-15 20:52:48.253570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.859 [2024-07-15 20:52:48.253577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.859 [2024-07-15 20:52:48.253586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.859 [2024-07-15 20:52:48.253602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.859 qpair failed and we were unable to recover it. 00:27:13.859 [2024-07-15 20:52:48.263543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.859 [2024-07-15 20:52:48.263606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.859 [2024-07-15 20:52:48.263621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.859 [2024-07-15 20:52:48.263628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.859 [2024-07-15 20:52:48.263635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.859 [2024-07-15 20:52:48.263649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.859 qpair failed and we were unable to recover it. 
00:27:13.859 [2024-07-15 20:52:48.273562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.859 [2024-07-15 20:52:48.273626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.859 [2024-07-15 20:52:48.273641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.859 [2024-07-15 20:52:48.273649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.859 [2024-07-15 20:52:48.273655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.859 [2024-07-15 20:52:48.273670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.859 qpair failed and we were unable to recover it. 00:27:13.859 [2024-07-15 20:52:48.283580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.859 [2024-07-15 20:52:48.283648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.859 [2024-07-15 20:52:48.283663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.859 [2024-07-15 20:52:48.283671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.859 [2024-07-15 20:52:48.283678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.859 [2024-07-15 20:52:48.283692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.860 qpair failed and we were unable to recover it. 00:27:13.860 [2024-07-15 20:52:48.293594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.860 [2024-07-15 20:52:48.293662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.860 [2024-07-15 20:52:48.293677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.860 [2024-07-15 20:52:48.293684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.860 [2024-07-15 20:52:48.293691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.860 [2024-07-15 20:52:48.293706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.860 qpair failed and we were unable to recover it. 
00:27:13.860 [2024-07-15 20:52:48.303638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.860 [2024-07-15 20:52:48.303709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.860 [2024-07-15 20:52:48.303725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.860 [2024-07-15 20:52:48.303733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.860 [2024-07-15 20:52:48.303739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.860 [2024-07-15 20:52:48.303754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.860 qpair failed and we were unable to recover it. 00:27:13.860 [2024-07-15 20:52:48.313667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.860 [2024-07-15 20:52:48.313739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.860 [2024-07-15 20:52:48.313754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.860 [2024-07-15 20:52:48.313762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.860 [2024-07-15 20:52:48.313768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.860 [2024-07-15 20:52:48.313782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.860 qpair failed and we were unable to recover it. 00:27:13.860 [2024-07-15 20:52:48.323677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.860 [2024-07-15 20:52:48.323743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.860 [2024-07-15 20:52:48.323758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.860 [2024-07-15 20:52:48.323765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.860 [2024-07-15 20:52:48.323771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.860 [2024-07-15 20:52:48.323786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.860 qpair failed and we were unable to recover it. 
00:27:13.860 [2024-07-15 20:52:48.333728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.860 [2024-07-15 20:52:48.333795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.860 [2024-07-15 20:52:48.333810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.860 [2024-07-15 20:52:48.333817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.860 [2024-07-15 20:52:48.333823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:13.860 [2024-07-15 20:52:48.333838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.860 qpair failed and we were unable to recover it. 00:27:14.118 [2024-07-15 20:52:48.343751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.118 [2024-07-15 20:52:48.343822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.118 [2024-07-15 20:52:48.343838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.118 [2024-07-15 20:52:48.343846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.118 [2024-07-15 20:52:48.343859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:14.118 [2024-07-15 20:52:48.343878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.118 qpair failed and we were unable to recover it. 00:27:14.118 [2024-07-15 20:52:48.353754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.118 [2024-07-15 20:52:48.353821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.119 [2024-07-15 20:52:48.353837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.119 [2024-07-15 20:52:48.353845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.119 [2024-07-15 20:52:48.353851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:14.119 [2024-07-15 20:52:48.353866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.119 qpair failed and we were unable to recover it. 
00:27:14.119 [2024-07-15 20:52:48.363796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.119 [2024-07-15 20:52:48.363876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.119 [2024-07-15 20:52:48.363891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.119 [2024-07-15 20:52:48.363899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.119 [2024-07-15 20:52:48.363905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4840000b90 00:27:14.119 [2024-07-15 20:52:48.363920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:14.119 qpair failed and we were unable to recover it. 00:27:14.119 [2024-07-15 20:52:48.373818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.119 [2024-07-15 20:52:48.373902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.119 [2024-07-15 20:52:48.373929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.119 [2024-07-15 20:52:48.373939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.119 [2024-07-15 20:52:48.373949] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4838000b90 00:27:14.119 [2024-07-15 20:52:48.373973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:14.119 qpair failed and we were unable to recover it. 00:27:14.119 [2024-07-15 20:52:48.383922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.119 [2024-07-15 20:52:48.384004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.119 [2024-07-15 20:52:48.384021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.119 [2024-07-15 20:52:48.384029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.119 [2024-07-15 20:52:48.384035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4838000b90 00:27:14.119 [2024-07-15 20:52:48.384050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:14.119 qpair failed and we were unable to recover it. 00:27:14.119 Controller properly reset. 
00:27:14.119 Initializing NVMe Controllers
00:27:14.119 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:14.119 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:14.119 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:27:14.119 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:27:14.119 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:27:14.119 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:27:14.119 Initialization complete. Launching workers.
00:27:14.119 Starting thread on core 1
00:27:14.119 Starting thread on core 2
00:27:14.119 Starting thread on core 3
00:27:14.119 Starting thread on core 0
00:27:14.119 20:52:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:27:14.119
00:27:14.119 real    0m11.434s
00:27:14.119 user    0m21.172s
00:27:14.119 sys     0m4.221s
00:27:14.119 20:52:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:27:14.119 20:52:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:14.119 ************************************
00:27:14.119 END TEST nvmf_target_disconnect_tc2
00:27:14.119 ************************************
00:27:14.119 20:52:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0
00:27:14.119 20:52:48 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:27:14.119 20:52:48 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:27:14.119 20:52:48 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:27:14.119 20:52:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup
00:27:14.119 20:52:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync
00:27:14.119 20:52:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:27:14.119 20:52:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e
00:27:14.119 20:52:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20}
00:27:14.119 20:52:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:27:14.378 rmmod nvme_tcp
00:27:14.378 rmmod nvme_fabrics
00:27:14.378 rmmod nvme_keyring
00:27:14.378 20:52:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:27:14.378 20:52:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e
00:27:14.378 20:52:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0
00:27:14.378 20:52:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2847038 ']'
00:27:14.378 20:52:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2847038
00:27:14.378 20:52:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 2847038 ']'
00:27:14.378 20:52:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 2847038
00:27:14.378 20:52:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname
00:27:14.378 20:52:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:14.378 20:52:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2847038
00:27:14.378 20:52:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4
00:27:14.378 20:52:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']'
00:27:14.378 20:52:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2847038'
00:27:14.378 killing process with pid 2847038
00:27:14.378 20:52:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 2847038
00:27:14.378 20:52:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 2847038
00:27:14.637 20:52:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:27:14.637 20:52:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:27:14.637 20:52:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:27:14.637 20:52:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:27:14.637 20:52:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns
00:27:14.637 20:52:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:14.637 20:52:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:27:14.637 20:52:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:16.542 20:52:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:27:16.542
00:27:16.542 real    0m19.663s
00:27:16.542 user    0m48.820s
00:27:16.542 sys     0m8.739s
00:27:16.542 20:52:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable
00:27:16.542 20:52:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:27:16.542 ************************************
00:27:16.542 END TEST nvmf_target_disconnect
00:27:16.542 ************************************
00:27:16.542 20:52:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:27:16.542 20:52:50 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host
00:27:16.542 20:52:50 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:27:16.542 20:52:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:27:16.803 20:52:51 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT
00:27:16.803
00:27:16.803 real    20m55.803s
00:27:16.803 user    45m23.666s
00:27:16.803 sys     6m19.254s
00:27:16.803 20:52:51 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:27:16.803 20:52:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:27:16.803 ************************************
00:27:16.803 END TEST nvmf_tcp
00:27:16.803 ************************************
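The teardown just traced follows one fixed pattern: kill the forked test process (after a comm= check that refuses to kill a bare sudo wrapper), unload the kernel NVMe modules, and flush the test interface addresses. A condensed sketch of that pattern, using the pid and interface name from this log (the function name and error handling here are illustrative, not the harness's exact implementation):

# Condensed sketch of the nvmftestfini/killprocess sequence traced above.
cleanup_nvmf_test() {
    local pid=2847038
    if kill -0 "$pid" 2>/dev/null; then
        # refuse to kill a bare sudo wrapper, as the comm= check above does
        if [ "$(ps --no-headers -o comm= "$pid")" != sudo ]; then
            kill "$pid" && wait "$pid"   # wait only works from the parent shell
        fi
    fi
    modprobe -v -r nvme-tcp      # also drops nvme_fabrics/nvme_keyring dependents
    modprobe -v -r nvme-fabrics
    ip -4 addr flush cvl_0_1
}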
00:27:16.803 20:52:51 -- common/autotest_common.sh@1142 -- # return 0
00:27:16.803 20:52:51 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]]
00:27:16.803 20:52:51 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:27:16.803 20:52:51 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:27:16.803 20:52:51 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:16.803 20:52:51 -- common/autotest_common.sh@10 -- # set +x
00:27:16.803 ************************************
00:27:16.803 START TEST spdkcli_nvmf_tcp
00:27:16.803 ************************************
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:27:16.803 * Looking for test storage...
00:27:16.803 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2848667
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2848667
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 2848667 ']'
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100
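waitforlisten, traced next, simply polls until the freshly launched target answers on its RPC socket. A minimal loop in the same spirit (a sketch, not the harness's exact code; it assumes the default socket path shown in the trace, and rpc_get_methods is a standard SPDK RPC):

# Minimal waitforlisten-style sketch: poll until the RPC socket exists
# and the target answers a basic RPC, then give up after ~10 seconds.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc_addr=/var/tmp/spdk.sock
for i in $(seq 1 100); do
    if [ -S "$rpc_addr" ] && "$spdk/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &>/dev/null; then
        break
    fi
    sleep 0.1
done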
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:16.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable
00:27:16.803 20:52:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:27:16.803 [2024-07-15 20:52:51.242910] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization...
00:27:16.803 [2024-07-15 20:52:51.242959] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2848667 ]
00:27:17.064 EAL: No free 2048 kB hugepages reported on node 1
00:27:17.064 [2024-07-15 20:52:51.297253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:27:17.064 [2024-07-15 20:52:51.372889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:27:17.064 [2024-07-15 20:52:51.372892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:27:17.629 20:52:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:27:17.629 20:52:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0
00:27:17.629 20:52:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt
00:27:17.629 20:52:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:27:17.629 20:52:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:27:17.629 20:52:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1
00:27:17.629 20:52:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]]
00:27:17.629 20:52:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config
00:27:17.629 20:52:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable
00:27:17.629 20:52:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
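The invocation that follows drives spdkcli through the spdkcli_job.py helper. From the trace, the helper appears to take a single argument of newline-separated "'command' 'expected substring' expect_success" triples; that argument contract is inferred from the xtrace below, not from the script's documentation. A reduced illustrative call with just two of the commands from the full list:

# Reduced sketch of the spdkcli_job.py invocation traced below.
spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
$spdkcli_job "'/bdevs/malloc create 32 512 Malloc1' 'Malloc1' True
'/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True' 'nqn.2014-08.org.spdk:cnode1' True"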
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:17.629 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:17.630 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:27:17.630 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:17.630 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:17.630 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:17.630 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:17.630 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:27:17.630 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:27:17.630 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:17.630 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:27:17.630 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:17.630 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:27:17.630 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:27:17.630 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:27:17.630 ' 00:27:20.162 [2024-07-15 20:52:54.481075] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:21.536 [2024-07-15 20:52:55.656993] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:27:23.440 [2024-07-15 20:52:57.819869] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:27:25.344 [2024-07-15 20:52:59.677724] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:27:26.721 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:27:26.721 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:27:26.721 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:27:26.721 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:27:26.721 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:27:26.721 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:27:26.721 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:27:26.721 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:26.722 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:27:26.722 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:27:26.722 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:26.722 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:26.722 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:27:26.722 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:26.722 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:26.722 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:27:26.722 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:26.722 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:26.722 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:26.722 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:26.722 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:27:26.722 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:27:26.722 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:26.722 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:27:26.722 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:26.722 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:27:26.722 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:27:26.722 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:27:26.979 20:53:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:27:26.979 20:53:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:26.979 20:53:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:26.979 20:53:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:27:26.979 20:53:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:26.979 20:53:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:26.979 20:53:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:27:26.980 20:53:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:27:27.238 20:53:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:27:27.238 20:53:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:27:27.238 20:53:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:27:27.238 20:53:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:27.238 20:53:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:27.238 20:53:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:27:27.238 20:53:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:27.238 20:53:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:27.238 20:53:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:27:27.238 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:27:27.238 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:27.238 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:27:27.238 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:27:27.238 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:27:27.238 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:27:27.238 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:27.238 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:27:27.238 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:27:27.238 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:27:27.238 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:27:27.238 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:27:27.238 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:27:27.238 ' 00:27:32.505 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:27:32.505 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:27:32.505 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:32.505 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:27:32.505 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:27:32.505 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:27:32.505 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:27:32.505 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:32.505 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:27:32.505 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:27:32.505 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:27:32.505 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:27:32.506 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:27:32.506 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:27:32.506 20:53:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:27:32.506 20:53:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:32.506 20:53:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:32.506 20:53:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2848667 00:27:32.506 20:53:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2848667 ']' 00:27:32.506 20:53:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2848667 00:27:32.506 20:53:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:27:32.506 20:53:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:32.506 20:53:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2848667 00:27:32.506 20:53:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:32.506 20:53:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:32.506 20:53:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2848667' 00:27:32.506 killing process with pid 2848667 00:27:32.506 20:53:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 2848667 00:27:32.506 20:53:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 2848667 00:27:32.506 20:53:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:27:32.506 20:53:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:27:32.506 20:53:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2848667 ']' 00:27:32.506 20:53:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2848667 00:27:32.506 20:53:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2848667 ']' 00:27:32.506 20:53:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2848667 00:27:32.506 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2848667) - No such process 00:27:32.506 20:53:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 2848667 is not found' 00:27:32.506 Process with pid 2848667 is not found 00:27:32.506 20:53:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:27:32.506 20:53:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:27:32.506 20:53:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:27:32.506 00:27:32.506 real 0m15.812s 00:27:32.506 user 0m32.860s 00:27:32.506 sys 0m0.682s 00:27:32.506 20:53:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:32.506 20:53:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:32.506 ************************************ 00:27:32.506 END TEST spdkcli_nvmf_tcp 00:27:32.506 ************************************ 00:27:32.506 20:53:06 -- common/autotest_common.sh@1142 -- # return 0 00:27:32.506 20:53:06 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:32.506 20:53:06 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:32.506 20:53:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:32.506 20:53:06 -- common/autotest_common.sh@10 -- # set +x 00:27:32.506 ************************************ 00:27:32.506 START TEST nvmf_identify_passthru 00:27:32.506 ************************************ 00:27:32.506 20:53:06 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:32.766 * Looking for test storage... 00:27:32.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:32.766 20:53:07 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:32.766 20:53:07 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:27:32.766 20:53:07 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:32.766 20:53:07 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:32.766 20:53:07 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:32.766 20:53:07 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:32.766 20:53:07 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:32.766 20:53:07 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:32.766 20:53:07 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:32.766 20:53:07 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:32.766 20:53:07 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:32.766 20:53:07 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:32.766 20:53:07 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:32.766 20:53:07 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:32.766 20:53:07 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:32.766 20:53:07 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:32.766 20:53:07 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:32.766 20:53:07 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:32.766 20:53:07 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:32.766 20:53:07 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:32.766 20:53:07 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:32.766 20:53:07 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:32.766 20:53:07 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.766 20:53:07 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.766 20:53:07 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.766 20:53:07 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:32.766 20:53:07 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.766 20:53:07 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:27:32.766 20:53:07 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:32.766 20:53:07 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:32.766 20:53:07 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:32.766 20:53:07 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:32.766 20:53:07 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:32.766 20:53:07 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:32.766 20:53:07 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:32.766 20:53:07 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:32.766 20:53:07 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:32.766 20:53:07 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:32.766 20:53:07 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:32.766 20:53:07 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:32.766 20:53:07 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.766 20:53:07 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.766 20:53:07 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.766 20:53:07 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:32.766 20:53:07 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.766 20:53:07 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:27:32.766 20:53:07 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:32.766 20:53:07 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:32.766 20:53:07 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:32.766 20:53:07 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:32.766 20:53:07 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:32.766 20:53:07 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:32.766 20:53:07 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:32.766 20:53:07 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:32.766 20:53:07 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:32.766 20:53:07 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:32.766 20:53:07 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:27:32.766 20:53:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:38.042 20:53:11 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:38.042 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:38.042 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:38.042 Found net devices under 0000:86:00.0: cvl_0_0 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:38.042 Found net devices under 0000:86:00.1: cvl_0_1 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:27:38.042 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:38.043 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
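The device discovery that completes above amounts to matching each PCI NIC against an allowlist of Intel (E810/X722) and Mellanox device IDs and collecting the kernel net interfaces behind every match. A minimal standalone sketch of that idea, assuming a Linux sysfs layout and using only the E810 ID (0x8086:0x159b) actually reported in this run:

    #!/bin/bash
    # Walk all PCI functions and report net devices behind allowlisted NICs.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor")
        device=$(cat "$pci/device")
        # 0x8086:0x159b is the Intel E810 seen above; add the other
        # allowlisted IDs (x722, mlx5) as needed.
        if [ "$vendor" = "0x8086" ] && [ "$device" = "0x159b" ]; then
            for net in "$pci"/net/*; do
                [ -e "$net" ] && echo "Found net devices under ${pci##*/}: $(basename "$net")"
            done
        fi
    done
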
00:27:38.043 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:38.043 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:38.043 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:38.043 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:38.043 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:38.043 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:38.043 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:38.043 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:38.043 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:38.043 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:38.043 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:38.043 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:38.043 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:38.043 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:38.043 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:38.043 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:38.043 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:38.043 20:53:11 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:38.043 20:53:12 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:38.043 20:53:12 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:38.043 20:53:12 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:38.043 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:38.043 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:27:38.043 00:27:38.043 --- 10.0.0.2 ping statistics --- 00:27:38.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.043 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:27:38.043 20:53:12 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:38.043 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:38.043 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:27:38.043 00:27:38.043 --- 10.0.0.1 ping statistics --- 00:27:38.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.043 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:27:38.043 20:53:12 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:38.043 20:53:12 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:27:38.043 20:53:12 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:38.043 20:53:12 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:38.043 20:53:12 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:38.043 20:53:12 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:38.043 20:53:12 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:38.043 20:53:12 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:38.043 20:53:12 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:38.043 20:53:12 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:27:38.043 20:53:12 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:38.043 20:53:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:38.043 20:53:12 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:27:38.043 20:53:12 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:27:38.043 20:53:12 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:27:38.043 20:53:12 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:27:38.043 20:53:12 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:27:38.043 20:53:12 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:27:38.043 20:53:12 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:27:38.043 20:53:12 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:38.043 20:53:12 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:38.043 20:53:12 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:27:38.043 20:53:12 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:27:38.043 20:53:12 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:27:38.043 20:53:12 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:5e:00.0 00:27:38.043 20:53:12 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:27:38.043 20:53:12 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:27:38.043 20:53:12 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:27:38.043 20:53:12 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:27:38.043 20:53:12 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:27:38.043 EAL: No free 2048 kB hugepages reported on node 1 00:27:42.266 
20:53:16 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:27:42.266 20:53:16 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:27:42.266 20:53:16 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:27:42.266 20:53:16 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:27:42.266 EAL: No free 2048 kB hugepages reported on node 1 00:27:46.459 20:53:20 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:27:46.459 20:53:20 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:27:46.459 20:53:20 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:46.459 20:53:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:46.459 20:53:20 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:27:46.459 20:53:20 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:46.459 20:53:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:46.459 20:53:20 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:46.459 20:53:20 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2855510 00:27:46.459 20:53:20 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:46.459 20:53:20 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2855510 00:27:46.459 20:53:20 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 2855510 ']' 00:27:46.459 20:53:20 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:46.459 20:53:20 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:46.459 20:53:20 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:46.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:46.459 20:53:20 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:46.460 20:53:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:46.460 [2024-07-15 20:53:20.589938] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:27:46.460 [2024-07-15 20:53:20.589984] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:46.460 EAL: No free 2048 kB hugepages reported on node 1 00:27:46.460 [2024-07-15 20:53:20.646387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:46.460 [2024-07-15 20:53:20.726231] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:46.460 [2024-07-15 20:53:20.726269] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
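Because nvmf_tgt was launched with --wait-for-rpc inside the cvl_0_0_ns_spdk namespace, nothing is configured until JSON-RPC calls arrive on /var/tmp/spdk.sock (a filesystem socket, so it is reachable from outside the namespace). rpc_cmd forwards its arguments to scripts/rpc.py, so the configuration issued in the following steps could be reproduced by hand, roughly, as (paths relative to the SPDK checkout):

    # Enable passthru of Identify admin commands, then finish init.
    scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
    scripts/rpc.py framework_start_init
    # TCP transport with 8192-byte I/O units, as in this run.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # Attach the local NVMe drive and export it over NVMe/TCP.
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
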
00:27:46.460 [2024-07-15 20:53:20.726276] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:46.460 [2024-07-15 20:53:20.726282] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:46.460 [2024-07-15 20:53:20.726287] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:46.460 [2024-07-15 20:53:20.726322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:46.460 [2024-07-15 20:53:20.726422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:46.460 [2024-07-15 20:53:20.726498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:46.460 [2024-07-15 20:53:20.726499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.027 20:53:21 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:47.027 20:53:21 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:27:47.027 20:53:21 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:27:47.027 20:53:21 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.027 20:53:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:47.027 INFO: Log level set to 20 00:27:47.027 INFO: Requests: 00:27:47.027 { 00:27:47.027 "jsonrpc": "2.0", 00:27:47.027 "method": "nvmf_set_config", 00:27:47.027 "id": 1, 00:27:47.027 "params": { 00:27:47.027 "admin_cmd_passthru": { 00:27:47.027 "identify_ctrlr": true 00:27:47.027 } 00:27:47.027 } 00:27:47.027 } 00:27:47.027 00:27:47.027 INFO: response: 00:27:47.027 { 00:27:47.027 "jsonrpc": "2.0", 00:27:47.027 "id": 1, 00:27:47.027 "result": true 00:27:47.027 } 00:27:47.027 00:27:47.027 20:53:21 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.027 20:53:21 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:27:47.027 20:53:21 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.027 20:53:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:47.027 INFO: Setting log level to 20 00:27:47.027 INFO: Setting log level to 20 00:27:47.027 INFO: Log level set to 20 00:27:47.027 INFO: Log level set to 20 00:27:47.027 INFO: Requests: 00:27:47.027 { 00:27:47.027 "jsonrpc": "2.0", 00:27:47.027 "method": "framework_start_init", 00:27:47.027 "id": 1 00:27:47.027 } 00:27:47.027 00:27:47.027 INFO: Requests: 00:27:47.027 { 00:27:47.027 "jsonrpc": "2.0", 00:27:47.027 "method": "framework_start_init", 00:27:47.027 "id": 1 00:27:47.027 } 00:27:47.027 00:27:47.027 [2024-07-15 20:53:21.507697] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:27:47.292 INFO: response: 00:27:47.292 { 00:27:47.292 "jsonrpc": "2.0", 00:27:47.292 "id": 1, 00:27:47.292 "result": true 00:27:47.292 } 00:27:47.292 00:27:47.292 INFO: response: 00:27:47.292 { 00:27:47.292 "jsonrpc": "2.0", 00:27:47.292 "id": 1, 00:27:47.292 "result": true 00:27:47.292 } 00:27:47.292 00:27:47.292 20:53:21 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.292 20:53:21 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:47.292 20:53:21 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.292 20:53:21 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:27:47.292 INFO: Setting log level to 40 00:27:47.292 INFO: Setting log level to 40 00:27:47.292 INFO: Setting log level to 40 00:27:47.292 [2024-07-15 20:53:21.521068] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:47.292 20:53:21 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.292 20:53:21 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:27:47.292 20:53:21 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:47.292 20:53:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:47.292 20:53:21 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:27:47.292 20:53:21 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.292 20:53:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:50.580 Nvme0n1 00:27:50.580 20:53:24 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.580 20:53:24 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:27:50.580 20:53:24 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.580 20:53:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:50.580 20:53:24 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.580 20:53:24 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:50.580 20:53:24 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.580 20:53:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:50.580 20:53:24 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.580 20:53:24 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:50.580 20:53:24 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.580 20:53:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:50.580 [2024-07-15 20:53:24.416201] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:50.580 20:53:24 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.580 20:53:24 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:27:50.580 20:53:24 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.580 20:53:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:50.580 [ 00:27:50.580 { 00:27:50.580 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:50.580 "subtype": "Discovery", 00:27:50.580 "listen_addresses": [], 00:27:50.580 "allow_any_host": true, 00:27:50.580 "hosts": [] 00:27:50.580 }, 00:27:50.580 { 00:27:50.580 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:50.580 "subtype": "NVMe", 00:27:50.580 "listen_addresses": [ 00:27:50.580 { 00:27:50.580 "trtype": "TCP", 00:27:50.580 "adrfam": "IPv4", 00:27:50.580 "traddr": "10.0.0.2", 00:27:50.580 "trsvcid": "4420" 00:27:50.580 } 00:27:50.580 ], 00:27:50.580 "allow_any_host": true, 00:27:50.580 "hosts": [], 00:27:50.580 "serial_number": 
"SPDK00000000000001", 00:27:50.580 "model_number": "SPDK bdev Controller", 00:27:50.580 "max_namespaces": 1, 00:27:50.580 "min_cntlid": 1, 00:27:50.580 "max_cntlid": 65519, 00:27:50.580 "namespaces": [ 00:27:50.580 { 00:27:50.580 "nsid": 1, 00:27:50.580 "bdev_name": "Nvme0n1", 00:27:50.580 "name": "Nvme0n1", 00:27:50.580 "nguid": "DF027CBC63A04B01B9095CF262B90CC4", 00:27:50.580 "uuid": "df027cbc-63a0-4b01-b909-5cf262b90cc4" 00:27:50.580 } 00:27:50.580 ] 00:27:50.580 } 00:27:50.580 ] 00:27:50.580 20:53:24 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.580 20:53:24 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:50.580 20:53:24 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:27:50.580 20:53:24 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:27:50.580 EAL: No free 2048 kB hugepages reported on node 1 00:27:50.580 20:53:24 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:27:50.580 20:53:24 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:50.580 20:53:24 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:27:50.580 20:53:24 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:27:50.580 EAL: No free 2048 kB hugepages reported on node 1 00:27:50.580 20:53:24 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:27:50.580 20:53:24 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:27:50.580 20:53:24 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:27:50.580 20:53:24 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:50.580 20:53:24 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.580 20:53:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:50.580 20:53:24 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.580 20:53:24 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:27:50.580 20:53:24 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:27:50.580 20:53:24 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:50.580 20:53:24 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:27:50.580 20:53:24 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:50.580 20:53:24 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:27:50.580 20:53:24 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:50.580 20:53:24 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:50.580 rmmod nvme_tcp 00:27:50.580 rmmod nvme_fabrics 00:27:50.580 rmmod nvme_keyring 00:27:50.580 20:53:24 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:50.580 20:53:24 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:27:50.580 20:53:24 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:27:50.580 20:53:24 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2855510 ']' 00:27:50.580 20:53:24 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2855510 00:27:50.580 20:53:24 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 2855510 ']' 00:27:50.580 20:53:24 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 2855510 00:27:50.580 20:53:24 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:27:50.580 20:53:24 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:50.580 20:53:24 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2855510 00:27:50.580 20:53:24 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:50.580 20:53:24 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:50.580 20:53:24 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2855510' 00:27:50.580 killing process with pid 2855510 00:27:50.580 20:53:24 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 2855510 00:27:50.580 20:53:24 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 2855510 00:27:51.958 20:53:26 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:51.958 20:53:26 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:51.958 20:53:26 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:51.958 20:53:26 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:51.958 20:53:26 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:51.958 20:53:26 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.958 20:53:26 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:51.958 20:53:26 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.494 20:53:28 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:54.494 00:27:54.494 real 0m21.429s 00:27:54.494 user 0m29.780s 00:27:54.494 sys 0m4.555s 00:27:54.494 20:53:28 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:54.494 20:53:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:54.494 ************************************ 00:27:54.494 END TEST nvmf_identify_passthru 00:27:54.494 ************************************ 00:27:54.494 20:53:28 -- common/autotest_common.sh@1142 -- # return 0 00:27:54.494 20:53:28 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:54.494 20:53:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:54.494 20:53:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:54.494 20:53:28 -- common/autotest_common.sh@10 -- # set +x 00:27:54.494 ************************************ 00:27:54.494 START TEST nvmf_dif 00:27:54.494 ************************************ 00:27:54.494 20:53:28 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:54.494 * Looking for test storage... 
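The teardown above kills the target, unloads the initiator-side kernel modules (nvme_tcp, nvme_fabrics, nvme_keyring in the rmmod output), and removes the test namespace before the next test (nvmf_dif) begins. A rough by-hand equivalent using the names from this run, where deleting cvl_0_0_ns_spdk is the assumed effect of _remove_spdk_ns:

    # Unload the NVMe/TCP initiator stack (mirrors the modprobe -r calls above).
    sudo modprobe -r nvme-tcp nvme-fabrics
    # Drop the target-side namespace; assumed to be what _remove_spdk_ns does here.
    sudo ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
    # Flush the initiator interface address, as nvmf_tcp_fini does above.
    sudo ip -4 addr flush cvl_0_1
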
00:27:54.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:54.494 20:53:28 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:54.494 20:53:28 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:27:54.494 20:53:28 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:54.494 20:53:28 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:54.494 20:53:28 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:54.494 20:53:28 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:54.494 20:53:28 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:54.494 20:53:28 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:54.494 20:53:28 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:54.494 20:53:28 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:54.494 20:53:28 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:54.494 20:53:28 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:54.494 20:53:28 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:54.494 20:53:28 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:54.494 20:53:28 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:54.494 20:53:28 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:54.494 20:53:28 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:54.494 20:53:28 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:54.494 20:53:28 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:54.494 20:53:28 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:54.494 20:53:28 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:54.494 20:53:28 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:54.494 20:53:28 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.494 20:53:28 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.494 20:53:28 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.494 20:53:28 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:27:54.494 20:53:28 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.494 20:53:28 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:27:54.494 20:53:28 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:54.494 20:53:28 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:54.494 20:53:28 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:54.494 20:53:28 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:54.494 20:53:28 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:54.494 20:53:28 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:54.494 20:53:28 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:54.494 20:53:28 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:54.494 20:53:28 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:27:54.494 20:53:28 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:54.494 20:53:28 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:54.494 20:53:28 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:27:54.494 20:53:28 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:27:54.494 20:53:28 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:54.494 20:53:28 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:54.494 20:53:28 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:54.494 20:53:28 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:54.494 20:53:28 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:54.494 20:53:28 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:54.494 20:53:28 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:54.494 20:53:28 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.494 20:53:28 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:54.494 20:53:28 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:54.494 20:53:28 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:27:54.494 20:53:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:59.764 20:53:33 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:59.764 20:53:33 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:27:59.764 20:53:33 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:59.764 20:53:33 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:59.764 20:53:33 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:59.764 20:53:33 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:59.764 20:53:33 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:59.764 20:53:33 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:27:59.764 20:53:33 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:59.764 20:53:33 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:27:59.764 20:53:33 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:27:59.764 20:53:33 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:27:59.764 20:53:33 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:27:59.764 20:53:33 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:27:59.764 20:53:33 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:27:59.764 20:53:33 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:59.764 20:53:33 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:59.764 20:53:33 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:59.764 20:53:33 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:59.764 20:53:33 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:59.764 20:53:33 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:59.764 20:53:33 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:59.764 20:53:33 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:59.764 20:53:33 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:59.764 20:53:33 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:59.764 20:53:33 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:59.764 20:53:33 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:59.764 20:53:33 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:59.764 20:53:33 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:59.765 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:59.765 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
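[annotation] The discovery loop above maps each supported NIC's PCI address to its kernel netdev through sysfs. A condensed sketch of that idea, using this host's two E810 ports; it assumes the device is bound to a driver (here ice) that exposes a net/ directory:

  for pci in 0000:86:00.0 0000:86:00.1; do
      for path in /sys/bus/pci/devices/$pci/net/*; do
          [ -e "$path" ] || continue                        # unbound device: no netdev, skip
          echo "Found net devices under $pci: ${path##*/}"  # e.g. cvl_0_0 / cvl_0_1
      done
  done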
00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:59.765 Found net devices under 0000:86:00.0: cvl_0_0 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:59.765 Found net devices under 0000:86:00.1: cvl_0_1 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:59.765 20:53:33 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:59.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:59.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:27:59.765 00:27:59.765 --- 10.0.0.2 ping statistics --- 00:27:59.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.765 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:59.765 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:59.765 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:27:59.765 00:27:59.765 --- 10.0.0.1 ping statistics --- 00:27:59.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.765 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:27:59.765 20:53:33 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:01.671 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:28:01.671 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:01.671 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:28:01.671 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:28:01.671 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:28:01.671 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:28:01.671 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:28:01.671 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:28:01.671 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:28:01.671 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:28:01.671 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:28:01.671 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:28:01.671 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:28:01.671 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:28:01.671 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:28:01.671 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:28:01.671 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:28:01.671 20:53:35 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:01.671 20:53:35 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:01.671 20:53:35 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:01.671 20:53:35 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:01.671 20:53:35 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:01.671 20:53:35 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:01.671 20:53:36 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:28:01.671 20:53:36 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:28:01.671 20:53:36 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:01.671 20:53:36 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:01.671 20:53:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:01.671 20:53:36 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2860922 00:28:01.671 20:53:36 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2860922 00:28:01.672 20:53:36 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:28:01.672 20:53:36 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 2860922 ']' 00:28:01.672 20:53:36 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:01.672 20:53:36 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:01.672 20:53:36 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:01.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:01.672 20:53:36 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:01.672 20:53:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:01.672 [2024-07-15 20:53:36.069942] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:28:01.672 [2024-07-15 20:53:36.069987] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:01.672 EAL: No free 2048 kB hugepages reported on node 1 00:28:01.672 [2024-07-15 20:53:36.125262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.930 [2024-07-15 20:53:36.205010] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:01.930 [2024-07-15 20:53:36.205044] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:01.930 [2024-07-15 20:53:36.205051] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:01.930 [2024-07-15 20:53:36.205057] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:01.930 [2024-07-15 20:53:36.205062] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
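[annotation] waitforlisten above blocks until the freshly launched nvmf_tgt answers on its RPC socket. A sketch of the launch-and-wait pattern, assuming the SPDK tree layout from this workspace; the polling loop is illustrative (the harness's waitforlisten is more thorough), though rpc_get_methods is a real SPDK RPC:

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5        # socket not up yet; keep polling
  done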
00:28:01.930 [2024-07-15 20:53:36.205080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:02.498 20:53:36 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:02.498 20:53:36 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:28:02.498 20:53:36 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:02.498 20:53:36 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:02.498 20:53:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:02.498 20:53:36 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:02.498 20:53:36 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:28:02.498 20:53:36 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:28:02.498 20:53:36 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.498 20:53:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:02.498 [2024-07-15 20:53:36.909250] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:02.498 20:53:36 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.498 20:53:36 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:28:02.498 20:53:36 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:02.498 20:53:36 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:02.498 20:53:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:02.498 ************************************ 00:28:02.498 START TEST fio_dif_1_default 00:28:02.498 ************************************ 00:28:02.498 20:53:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:28:02.498 20:53:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:28:02.498 20:53:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:28:02.498 20:53:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:28:02.499 20:53:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:28:02.499 20:53:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:28:02.499 20:53:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:02.499 20:53:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.499 20:53:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:02.499 bdev_null0 00:28:02.499 20:53:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.499 20:53:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:02.499 20:53:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.499 20:53:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:02.499 20:53:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.499 20:53:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:02.499 20:53:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.499 20:53:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:02.499 20:53:36 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.499 20:53:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:02.499 20:53:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.499 20:53:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:02.760 [2024-07-15 20:53:36.981547] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:02.760 20:53:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.760 20:53:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:28:02.760 20:53:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:28:02.760 20:53:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:02.760 20:53:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:28:02.760 20:53:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:02.760 20:53:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:28:02.760 20:53:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:02.760 20:53:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:02.760 20:53:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:28:02.760 20:53:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:02.760 { 00:28:02.760 "params": { 00:28:02.760 "name": "Nvme$subsystem", 00:28:02.760 "trtype": "$TEST_TRANSPORT", 00:28:02.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.760 "adrfam": "ipv4", 00:28:02.760 "trsvcid": "$NVMF_PORT", 00:28:02.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.760 "hdgst": ${hdgst:-false}, 00:28:02.760 "ddgst": ${ddgst:-false} 00:28:02.760 }, 00:28:02.760 "method": "bdev_nvme_attach_controller" 00:28:02.760 } 00:28:02.760 EOF 00:28:02.760 )") 00:28:02.760 20:53:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:02.760 20:53:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:28:02.760 20:53:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:02.760 20:53:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:28:02.760 20:53:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:02.760 20:53:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:02.760 20:53:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:28:02.760 20:53:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:02.760 20:53:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:02.760 20:53:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:28:02.760 20:53:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:28:02.760 20:53:36 nvmf_dif.fio_dif_1_default 
-- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:02.760 20:53:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:28:02.760 20:53:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:28:02.760 20:53:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:02.760 20:53:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:28:02.760 20:53:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:28:02.760 20:53:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:02.760 "params": { 00:28:02.760 "name": "Nvme0", 00:28:02.760 "trtype": "tcp", 00:28:02.760 "traddr": "10.0.0.2", 00:28:02.760 "adrfam": "ipv4", 00:28:02.760 "trsvcid": "4420", 00:28:02.760 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:02.760 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:02.760 "hdgst": false, 00:28:02.760 "ddgst": false 00:28:02.760 }, 00:28:02.760 "method": "bdev_nvme_attach_controller" 00:28:02.760 }' 00:28:02.760 20:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:02.760 20:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:02.760 20:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:02.760 20:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:02.760 20:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:02.760 20:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:02.760 20:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:02.760 20:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:02.760 20:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:02.760 20:53:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:03.112 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:03.112 fio-3.35 00:28:03.112 Starting 1 thread 00:28:03.112 EAL: No free 2048 kB hugepages reported on node 1 00:28:15.328 00:28:15.329 filename0: (groupid=0, jobs=1): err= 0: pid=2861373: Mon Jul 15 20:53:47 2024 00:28:15.329 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10006msec) 00:28:15.329 slat (nsec): min=4282, max=22452, avg=6138.75, stdev=818.93 00:28:15.329 clat (usec): min=508, max=48330, avg=21047.01, stdev=20371.46 00:28:15.329 lat (usec): min=514, max=48343, avg=21053.15, stdev=20371.46 00:28:15.329 clat percentiles (usec): 00:28:15.329 | 1.00th=[ 603], 5.00th=[ 611], 10.00th=[ 611], 20.00th=[ 619], 00:28:15.329 | 30.00th=[ 627], 40.00th=[ 652], 50.00th=[41157], 60.00th=[41157], 00:28:15.329 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:28:15.329 | 99.00th=[41681], 99.50th=[42206], 99.90th=[48497], 99.95th=[48497], 00:28:15.329 | 99.99th=[48497] 00:28:15.329 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=761.26, stdev=20.18, samples=19 00:28:15.329 iops : min= 176, max= 192, avg=190.32, stdev= 5.04, samples=19 00:28:15.329 lat 
(usec) : 750=49.11%, 1000=0.79% 00:28:15.329 lat (msec) : 50=50.11% 00:28:15.329 cpu : usr=94.45%, sys=5.30%, ctx=12, majf=0, minf=215 00:28:15.329 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:15.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.329 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.329 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.329 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:15.329 00:28:15.329 Run status group 0 (all jobs): 00:28:15.329 READ: bw=760KiB/s (778kB/s), 760KiB/s-760KiB/s (778kB/s-778kB/s), io=7600KiB (7782kB), run=10006-10006msec 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.329 00:28:15.329 real 0m11.110s 00:28:15.329 user 0m16.145s 00:28:15.329 sys 0m0.801s 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:15.329 ************************************ 00:28:15.329 END TEST fio_dif_1_default 00:28:15.329 ************************************ 00:28:15.329 20:53:48 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:15.329 20:53:48 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:28:15.329 20:53:48 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:15.329 20:53:48 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:15.329 20:53:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:15.329 ************************************ 00:28:15.329 START TEST fio_dif_1_multi_subsystems 00:28:15.329 ************************************ 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 
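[annotation] create_subsystem, traced once above for the single-subsystem run and repeated below for subsystems 0 and 1, is a fixed four-step RPC sequence. The arguments here are copied from the trace, with rpc_cmd expanded to a direct scripts/rpc.py call against the default socket:

  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
      --serial-number 53313233-0 --allow-any-host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420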
00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:15.329 bdev_null0 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:15.329 [2024-07-15 20:53:48.162343] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:15.329 bdev_null1 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:15.329 20:53:48 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:15.329 { 00:28:15.329 "params": { 00:28:15.329 "name": "Nvme$subsystem", 00:28:15.329 "trtype": "$TEST_TRANSPORT", 00:28:15.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:15.329 "adrfam": "ipv4", 00:28:15.329 "trsvcid": "$NVMF_PORT", 00:28:15.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:15.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:15.329 "hdgst": ${hdgst:-false}, 00:28:15.329 "ddgst": ${ddgst:-false} 00:28:15.329 }, 00:28:15.329 "method": "bdev_nvme_attach_controller" 00:28:15.329 } 00:28:15.329 EOF 00:28:15.329 )") 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:28:15.329 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:28:15.330 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:15.330 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:15.330 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:15.330 { 00:28:15.330 "params": { 00:28:15.330 "name": "Nvme$subsystem", 00:28:15.330 "trtype": "$TEST_TRANSPORT", 00:28:15.330 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:15.330 "adrfam": "ipv4", 00:28:15.330 "trsvcid": "$NVMF_PORT", 00:28:15.330 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:15.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:15.330 "hdgst": ${hdgst:-false}, 00:28:15.330 "ddgst": ${ddgst:-false} 00:28:15.330 }, 00:28:15.330 "method": "bdev_nvme_attach_controller" 00:28:15.330 } 00:28:15.330 EOF 00:28:15.330 )") 00:28:15.330 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:28:15.330 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:28:15.330 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:28:15.330 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
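[annotation] The JSON being assembled above never touches disk: fio_bdev hands it to fio as a file descriptor (the /dev/fd/62 and /dev/fd/61 arguments in the trace) with the SPDK bdev plugin preloaded. The pattern, reconstructed from the trace; gen_nvmf_target_json and gen_fio_conf are the harness helpers seen above and only exist inside it:

  LD_PRELOAD=./build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev \
      --spdk_json_conf <(gen_nvmf_target_json 0 1) \
      <(gen_fio_conf)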
00:28:15.330 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:28:15.330 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:15.330 "params": { 00:28:15.330 "name": "Nvme0", 00:28:15.330 "trtype": "tcp", 00:28:15.330 "traddr": "10.0.0.2", 00:28:15.330 "adrfam": "ipv4", 00:28:15.330 "trsvcid": "4420", 00:28:15.330 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:15.330 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:15.330 "hdgst": false, 00:28:15.330 "ddgst": false 00:28:15.330 }, 00:28:15.330 "method": "bdev_nvme_attach_controller" 00:28:15.330 },{ 00:28:15.330 "params": { 00:28:15.330 "name": "Nvme1", 00:28:15.330 "trtype": "tcp", 00:28:15.330 "traddr": "10.0.0.2", 00:28:15.330 "adrfam": "ipv4", 00:28:15.330 "trsvcid": "4420", 00:28:15.330 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:15.330 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:15.330 "hdgst": false, 00:28:15.330 "ddgst": false 00:28:15.330 }, 00:28:15.330 "method": "bdev_nvme_attach_controller" 00:28:15.330 }' 00:28:15.330 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:15.330 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:15.330 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:15.330 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:15.330 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:15.330 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:15.330 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:15.330 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:15.330 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:15.330 20:53:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:15.330 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:15.330 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:15.330 fio-3.35 00:28:15.330 Starting 2 threads 00:28:15.330 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.309 00:28:25.309 filename0: (groupid=0, jobs=1): err= 0: pid=2863366: Mon Jul 15 20:53:59 2024 00:28:25.309 read: IOPS=171, BW=684KiB/s (701kB/s)(6864KiB/10028msec) 00:28:25.309 slat (nsec): min=6032, max=26336, avg=7279.72, stdev=2215.20 00:28:25.309 clat (usec): min=423, max=42545, avg=23354.29, stdev=20207.55 00:28:25.309 lat (usec): min=429, max=42552, avg=23361.57, stdev=20207.05 00:28:25.309 clat percentiles (usec): 00:28:25.309 | 1.00th=[ 433], 5.00th=[ 441], 10.00th=[ 453], 20.00th=[ 627], 00:28:25.309 | 30.00th=[ 644], 40.00th=[ 750], 50.00th=[41157], 60.00th=[41157], 00:28:25.309 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:28:25.309 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:28:25.309 | 99.99th=[42730] 
00:28:25.309 bw ( KiB/s): min= 512, max= 768, per=47.42%, avg=684.80, stdev=85.87, samples=20 00:28:25.310 iops : min= 128, max= 192, avg=171.20, stdev=21.47, samples=20 00:28:25.310 lat (usec) : 500=16.61%, 750=23.43%, 1000=4.02% 00:28:25.310 lat (msec) : 50=55.94% 00:28:25.310 cpu : usr=97.49%, sys=2.26%, ctx=15, majf=0, minf=154 00:28:25.310 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:25.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.310 issued rwts: total=1716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:25.310 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:25.310 filename1: (groupid=0, jobs=1): err= 0: pid=2863367: Mon Jul 15 20:53:59 2024 00:28:25.310 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10002msec) 00:28:25.310 slat (nsec): min=6025, max=27009, avg=7216.61, stdev=2035.57 00:28:25.310 clat (usec): min=609, max=42335, avg=21035.75, stdev=20337.04 00:28:25.310 lat (usec): min=616, max=42342, avg=21042.97, stdev=20336.44 00:28:25.310 clat percentiles (usec): 00:28:25.310 | 1.00th=[ 619], 5.00th=[ 619], 10.00th=[ 627], 20.00th=[ 635], 00:28:25.310 | 30.00th=[ 644], 40.00th=[ 701], 50.00th=[40633], 60.00th=[41157], 00:28:25.310 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:28:25.310 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:28:25.310 | 99.99th=[42206] 00:28:25.310 bw ( KiB/s): min= 704, max= 768, per=52.76%, avg=761.26, stdev=20.18, samples=19 00:28:25.310 iops : min= 176, max= 192, avg=190.32, stdev= 5.04, samples=19 00:28:25.310 lat (usec) : 750=45.58%, 1000=3.68% 00:28:25.310 lat (msec) : 2=0.63%, 50=50.11% 00:28:25.310 cpu : usr=97.44%, sys=2.32%, ctx=10, majf=0, minf=100 00:28:25.310 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:25.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.310 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:25.310 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:25.310 00:28:25.310 Run status group 0 (all jobs): 00:28:25.310 READ: bw=1442KiB/s (1477kB/s), 684KiB/s-760KiB/s (701kB/s-778kB/s), io=14.1MiB (14.8MB), run=10002-10028msec 00:28:25.310 20:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:28:25.310 20:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:28:25.310 20:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:25.310 20:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:25.310 20:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:28:25.310 20:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:25.310 20:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.310 20:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:25.310 20:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.310 20:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:25.310 20:53:59 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.310 20:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:25.310 20:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.310 20:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:25.310 20:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:25.310 20:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:28:25.310 20:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:25.310 20:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.310 20:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:25.310 20:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.310 20:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:25.310 20:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.310 20:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:25.310 20:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.310 00:28:25.310 real 0m11.531s 00:28:25.310 user 0m26.354s 00:28:25.310 sys 0m0.741s 00:28:25.310 20:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:25.310 20:53:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:25.310 ************************************ 00:28:25.310 END TEST fio_dif_1_multi_subsystems 00:28:25.310 ************************************ 00:28:25.310 20:53:59 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:25.310 20:53:59 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:28:25.310 20:53:59 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:25.310 20:53:59 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:25.310 20:53:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:25.310 ************************************ 00:28:25.310 START TEST fio_dif_rand_params 00:28:25.310 ************************************ 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub 
in "$@" 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:25.310 bdev_null0 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:25.310 [2024-07-15 20:53:59.763327] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:25.310 { 00:28:25.310 "params": { 00:28:25.310 "name": "Nvme$subsystem", 00:28:25.310 "trtype": "$TEST_TRANSPORT", 00:28:25.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.310 "adrfam": "ipv4", 00:28:25.310 
"trsvcid": "$NVMF_PORT", 00:28:25.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.310 "hdgst": ${hdgst:-false}, 00:28:25.310 "ddgst": ${ddgst:-false} 00:28:25.310 }, 00:28:25.310 "method": "bdev_nvme_attach_controller" 00:28:25.310 } 00:28:25.310 EOF 00:28:25.310 )") 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:25.310 20:53:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:25.310 "params": { 00:28:25.310 "name": "Nvme0", 00:28:25.310 "trtype": "tcp", 00:28:25.310 "traddr": "10.0.0.2", 00:28:25.310 "adrfam": "ipv4", 00:28:25.310 "trsvcid": "4420", 00:28:25.310 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:25.310 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:25.310 "hdgst": false, 00:28:25.310 "ddgst": false 00:28:25.310 }, 00:28:25.310 "method": "bdev_nvme_attach_controller" 00:28:25.310 }' 00:28:25.596 20:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:25.596 20:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:25.596 20:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:25.596 20:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:25.597 20:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:25.597 20:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:25.597 20:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:25.597 20:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:25.597 20:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:25.597 20:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:25.855 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:25.855 ... 
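[annotation] The rand_params run differs from the earlier cases in its protection setup: the TCP transport was created with DIF insert/strip on the wire, while this null bdev carries DIF type 3 metadata instead of type 1. Both RPCs exactly as traced above, expanded from rpc_cmd:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3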
00:28:25.855 fio-3.35 00:28:25.855 Starting 3 threads 00:28:25.855 EAL: No free 2048 kB hugepages reported on node 1 00:28:32.397 00:28:32.397 filename0: (groupid=0, jobs=1): err= 0: pid=2865274: Mon Jul 15 20:54:05 2024 00:28:32.397 read: IOPS=277, BW=34.6MiB/s (36.3MB/s)(174MiB/5014msec) 00:28:32.397 slat (nsec): min=6250, max=38013, avg=9519.16, stdev=2721.07 00:28:32.397 clat (usec): min=3604, max=94785, avg=10815.22, stdev=12442.03 00:28:32.397 lat (usec): min=3611, max=94792, avg=10824.74, stdev=12442.31 00:28:32.397 clat percentiles (usec): 00:28:32.397 | 1.00th=[ 4113], 5.00th=[ 4424], 10.00th=[ 4555], 20.00th=[ 5080], 00:28:32.397 | 30.00th=[ 6259], 40.00th=[ 6783], 50.00th=[ 7242], 60.00th=[ 7635], 00:28:32.397 | 70.00th=[ 8356], 80.00th=[ 9634], 90.00th=[11338], 95.00th=[48497], 00:28:32.397 | 99.00th=[51643], 99.50th=[52691], 99.90th=[94897], 99.95th=[94897], 00:28:32.397 | 99.99th=[94897] 00:28:32.397 bw ( KiB/s): min=24015, max=54272, per=36.07%, avg=35476.70, stdev=9709.08, samples=10 00:28:32.397 iops : min= 187, max= 424, avg=277.10, stdev=75.93, samples=10 00:28:32.397 lat (msec) : 4=0.36%, 10=82.22%, 20=8.78%, 50=5.54%, 100=3.10% 00:28:32.397 cpu : usr=94.93%, sys=4.73%, ctx=8, majf=0, minf=137 00:28:32.397 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:32.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:32.397 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:32.397 issued rwts: total=1389,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:32.397 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:32.397 filename0: (groupid=0, jobs=1): err= 0: pid=2865275: Mon Jul 15 20:54:05 2024 00:28:32.397 read: IOPS=229, BW=28.6MiB/s (30.0MB/s)(143MiB/5002msec) 00:28:32.397 slat (nsec): min=6226, max=24971, avg=9498.03, stdev=2741.32 00:28:32.397 clat (usec): min=3937, max=52910, avg=13079.03, stdev=14263.07 00:28:32.397 lat (usec): min=3944, max=52922, avg=13088.53, stdev=14263.31 00:28:32.397 clat percentiles (usec): 00:28:32.397 | 1.00th=[ 4359], 5.00th=[ 4621], 10.00th=[ 4883], 20.00th=[ 5997], 00:28:32.397 | 30.00th=[ 6783], 40.00th=[ 7242], 50.00th=[ 7701], 60.00th=[ 8455], 00:28:32.397 | 70.00th=[ 9634], 80.00th=[10814], 90.00th=[47973], 95.00th=[50070], 00:28:32.397 | 99.00th=[52167], 99.50th=[52167], 99.90th=[52691], 99.95th=[52691], 00:28:32.397 | 99.99th=[52691] 00:28:32.397 bw ( KiB/s): min=22272, max=46080, per=29.07%, avg=28591.78, stdev=7347.60, samples=9 00:28:32.397 iops : min= 174, max= 360, avg=223.33, stdev=57.44, samples=9 00:28:32.397 lat (msec) : 4=0.09%, 10=72.95%, 20=13.87%, 50=7.68%, 100=5.41% 00:28:32.397 cpu : usr=95.56%, sys=4.12%, ctx=12, majf=0, minf=76 00:28:32.397 IO depths : 1=1.6%, 2=98.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:32.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:32.397 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:32.397 issued rwts: total=1146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:32.397 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:32.397 filename0: (groupid=0, jobs=1): err= 0: pid=2865276: Mon Jul 15 20:54:05 2024 00:28:32.397 read: IOPS=263, BW=32.9MiB/s (34.5MB/s)(165MiB/5002msec) 00:28:32.397 slat (nsec): min=6221, max=32225, avg=9242.13, stdev=2886.63 00:28:32.397 clat (usec): min=3999, max=91692, avg=11369.89, stdev=13640.68 00:28:32.397 lat (usec): min=4006, max=91703, avg=11379.13, stdev=13640.89 00:28:32.397 clat percentiles 
(usec): 00:28:32.397 | 1.00th=[ 4424], 5.00th=[ 4686], 10.00th=[ 4883], 20.00th=[ 5145], 00:28:32.397 | 30.00th=[ 6063], 40.00th=[ 6652], 50.00th=[ 6980], 60.00th=[ 7373], 00:28:32.397 | 70.00th=[ 8160], 80.00th=[ 8979], 90.00th=[46400], 95.00th=[48497], 00:28:32.397 | 99.00th=[50594], 99.50th=[52167], 99.90th=[90702], 99.95th=[91751], 00:28:32.397 | 99.99th=[91751] 00:28:32.397 bw ( KiB/s): min=25088, max=41472, per=34.09%, avg=33536.00, stdev=5545.52, samples=9 00:28:32.397 iops : min= 196, max= 324, avg=262.00, stdev=43.32, samples=9 00:28:32.397 lat (msec) : 4=0.08%, 10=87.33%, 20=2.12%, 50=8.95%, 100=1.52% 00:28:32.397 cpu : usr=95.32%, sys=4.34%, ctx=13, majf=0, minf=93 00:28:32.397 IO depths : 1=3.1%, 2=96.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:32.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:32.397 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:32.397 issued rwts: total=1318,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:32.397 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:32.397 00:28:32.397 Run status group 0 (all jobs): 00:28:32.397 READ: bw=96.1MiB/s (101MB/s), 28.6MiB/s-34.6MiB/s (30.0MB/s-36.3MB/s), io=482MiB (505MB), run=5002-5014msec 00:28:32.397 20:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:28:32.397 20:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:32.397 20:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:32.397 20:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:32.397 20:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:32.397 20:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:32.397 20:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.397 20:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:32.397 20:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.397 20:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:32.397 20:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.397 20:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:32.397 20:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.397 20:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:28:32.397 20:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:28:32.397 20:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:28:32.397 20:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:28:32.397 20:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:28:32.397 20:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:28:32.397 20:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:28:32.397 20:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:32.397 20:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:32.397 20:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:32.397 20:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:28:32.397 20:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:28:32.397 20:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.398 20:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:32.398 bdev_null0 00:28:32.398 20:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.398 20:54:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:32.398 20:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.398 20:54:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:32.398 [2024-07-15 20:54:06.015618] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:32.398 bdev_null1 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:32.398 bdev_null2 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:32.398 { 00:28:32.398 "params": { 00:28:32.398 "name": "Nvme$subsystem", 00:28:32.398 "trtype": "$TEST_TRANSPORT", 00:28:32.398 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:32.398 "adrfam": "ipv4", 00:28:32.398 "trsvcid": "$NVMF_PORT", 00:28:32.398 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:32.398 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:32.398 "hdgst": ${hdgst:-false}, 00:28:32.398 "ddgst": ${ddgst:-false} 00:28:32.398 }, 00:28:32.398 "method": "bdev_nvme_attach_controller" 00:28:32.398 } 00:28:32.398 EOF 00:28:32.398 )") 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:32.398 { 00:28:32.398 "params": { 00:28:32.398 "name": "Nvme$subsystem", 00:28:32.398 "trtype": "$TEST_TRANSPORT", 00:28:32.398 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:32.398 "adrfam": "ipv4", 00:28:32.398 "trsvcid": "$NVMF_PORT", 00:28:32.398 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:32.398 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:32.398 "hdgst": ${hdgst:-false}, 00:28:32.398 "ddgst": ${ddgst:-false} 00:28:32.398 }, 00:28:32.398 "method": "bdev_nvme_attach_controller" 00:28:32.398 } 00:28:32.398 EOF 00:28:32.398 )") 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:32.398 20:54:06 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:32.398 { 00:28:32.398 "params": { 00:28:32.398 "name": "Nvme$subsystem", 00:28:32.398 "trtype": "$TEST_TRANSPORT", 00:28:32.398 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:32.398 "adrfam": "ipv4", 00:28:32.398 "trsvcid": "$NVMF_PORT", 00:28:32.398 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:32.398 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:32.398 "hdgst": ${hdgst:-false}, 00:28:32.398 "ddgst": ${ddgst:-false} 00:28:32.398 }, 00:28:32.398 "method": "bdev_nvme_attach_controller" 00:28:32.398 } 00:28:32.398 EOF 00:28:32.398 )") 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:32.398 "params": { 00:28:32.398 "name": "Nvme0", 00:28:32.398 "trtype": "tcp", 00:28:32.398 "traddr": "10.0.0.2", 00:28:32.398 "adrfam": "ipv4", 00:28:32.398 "trsvcid": "4420", 00:28:32.398 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:32.398 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:32.398 "hdgst": false, 00:28:32.398 "ddgst": false 00:28:32.398 }, 00:28:32.398 "method": "bdev_nvme_attach_controller" 00:28:32.398 },{ 00:28:32.398 "params": { 00:28:32.398 "name": "Nvme1", 00:28:32.398 "trtype": "tcp", 00:28:32.398 "traddr": "10.0.0.2", 00:28:32.398 "adrfam": "ipv4", 00:28:32.398 "trsvcid": "4420", 00:28:32.398 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:32.398 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:32.398 "hdgst": false, 00:28:32.398 "ddgst": false 00:28:32.398 }, 00:28:32.398 "method": "bdev_nvme_attach_controller" 00:28:32.398 },{ 00:28:32.398 "params": { 00:28:32.398 "name": "Nvme2", 00:28:32.398 "trtype": "tcp", 00:28:32.398 "traddr": "10.0.0.2", 00:28:32.398 "adrfam": "ipv4", 00:28:32.398 "trsvcid": "4420", 00:28:32.398 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:32.398 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:32.398 "hdgst": false, 00:28:32.398 "ddgst": false 00:28:32.398 }, 00:28:32.398 "method": "bdev_nvme_attach_controller" 00:28:32.398 }' 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:32.398 
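The three controllers in the JSON just printed attach to the subsystems created a few steps earlier with rpc_cmd: each is a 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata, and DIF type 2 protection, wrapped in an NVMe-oF subsystem with a TCP listener. Outside the harness the same sequence can be issued with SPDK's scripts/rpc.py; a sketch, assuming a running nvmf_tgt whose TCP transport has already been created:

# Sketch only: assumes "nvmf_create_transport -t tcp" was already issued.
RPC=./scripts/rpc.py
for sub in 0 1 2; do
  # 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 2
  $RPC bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 2
  $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
    --serial-number "53313233-$sub" --allow-any-host
  $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
  $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
    -t tcp -a 10.0.0.2 -s 4420
done
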
20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:32.398 20:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:32.398 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:32.398 ... 00:28:32.398 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:32.398 ... 00:28:32.398 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:32.398 ... 00:28:32.398 fio-3.35 00:28:32.398 Starting 24 threads 00:28:32.398 EAL: No free 2048 kB hugepages reported on node 1 00:28:44.600 00:28:44.600 filename0: (groupid=0, jobs=1): err= 0: pid=2866623: Mon Jul 15 20:54:17 2024 00:28:44.600 read: IOPS=618, BW=2474KiB/s (2533kB/s)(24.2MiB/10011msec) 00:28:44.600 slat (nsec): min=6074, max=64845, avg=11586.75, stdev=5774.90 00:28:44.600 clat (usec): min=3323, max=29296, avg=25757.69, stdev=2253.28 00:28:44.600 lat (usec): min=3339, max=29312, avg=25769.28, stdev=2253.26 00:28:44.600 clat percentiles (usec): 00:28:44.600 | 1.00th=[11994], 5.00th=[25035], 10.00th=[25297], 20.00th=[25297], 00:28:44.600 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25560], 60.00th=[25560], 00:28:44.600 | 70.00th=[26084], 80.00th=[26608], 90.00th=[27657], 95.00th=[27919], 00:28:44.600 | 99.00th=[28181], 99.50th=[28443], 99.90th=[29230], 99.95th=[29230], 00:28:44.600 | 99.99th=[29230] 00:28:44.600 bw ( KiB/s): min= 2304, max= 2810, per=4.20%, avg=2471.84, stdev=120.23, samples=19 00:28:44.600 iops : min= 576, max= 702, avg=617.89, stdev=30.00, samples=19 00:28:44.600 lat (msec) : 4=0.48%, 10=0.29%, 20=0.26%, 50=98.97% 00:28:44.600 cpu : usr=98.62%, sys=0.91%, ctx=58, majf=0, minf=65 00:28:44.600 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:44.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.600 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.600 issued rwts: total=6192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.600 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.600 filename0: (groupid=0, jobs=1): err= 0: pid=2866624: Mon Jul 15 20:54:17 2024 00:28:44.600 read: IOPS=612, BW=2448KiB/s (2507kB/s)(23.9MiB/10013msec) 00:28:44.600 slat (nsec): min=11616, max=97372, avg=47856.59, stdev=18985.20 00:28:44.600 clat (usec): min=22172, max=47618, avg=25744.90, stdev=1492.26 00:28:44.600 lat (usec): min=22248, max=47638, avg=25792.75, stdev=1489.97 00:28:44.600 clat percentiles (usec): 00:28:44.600 | 1.00th=[24511], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:28:44.600 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:28:44.600 | 70.00th=[25822], 80.00th=[26608], 90.00th=[27395], 95.00th=[27919], 00:28:44.600 | 99.00th=[28181], 99.50th=[29230], 99.90th=[47449], 99.95th=[47449], 00:28:44.600 | 99.99th=[47449] 00:28:44.600 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2445.47, stdev=94.40, samples=19 00:28:44.600 iops : min= 576, max= 640, avg=611.37, stdev=23.60, samples=19 
00:28:44.600 lat (msec) : 50=100.00% 00:28:44.600 cpu : usr=98.55%, sys=0.85%, ctx=207, majf=0, minf=27 00:28:44.600 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:44.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.600 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.600 issued rwts: total=6128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.600 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.600 filename0: (groupid=0, jobs=1): err= 0: pid=2866625: Mon Jul 15 20:54:17 2024 00:28:44.600 read: IOPS=613, BW=2454KiB/s (2513kB/s)(24.0MiB/10015msec) 00:28:44.600 slat (nsec): min=6685, max=92654, avg=36943.21, stdev=19101.16 00:28:44.600 clat (usec): min=15228, max=39339, avg=25773.52, stdev=1502.89 00:28:44.600 lat (usec): min=15298, max=39354, avg=25810.46, stdev=1501.25 00:28:44.600 clat percentiles (usec): 00:28:44.600 | 1.00th=[21627], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:28:44.600 | 30.00th=[25297], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:28:44.600 | 70.00th=[25822], 80.00th=[26608], 90.00th=[27657], 95.00th=[27919], 00:28:44.600 | 99.00th=[28443], 99.50th=[34866], 99.90th=[35390], 99.95th=[39060], 00:28:44.600 | 99.99th=[39584] 00:28:44.600 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2445.47, stdev=71.21, samples=19 00:28:44.600 iops : min= 576, max= 640, avg=611.37, stdev=17.80, samples=19 00:28:44.600 lat (msec) : 20=0.81%, 50=99.19% 00:28:44.600 cpu : usr=99.06%, sys=0.56%, ctx=7, majf=0, minf=28 00:28:44.600 IO depths : 1=5.8%, 2=11.9%, 4=24.9%, 8=50.7%, 16=6.7%, 32=0.0%, >=64=0.0% 00:28:44.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.600 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.600 issued rwts: total=6144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.600 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.600 filename0: (groupid=0, jobs=1): err= 0: pid=2866626: Mon Jul 15 20:54:17 2024 00:28:44.600 read: IOPS=612, BW=2449KiB/s (2507kB/s)(23.9MiB/10011msec) 00:28:44.600 slat (nsec): min=5213, max=80431, avg=39502.29, stdev=13467.37 00:28:44.600 clat (usec): min=18170, max=51691, avg=25804.36, stdev=1667.35 00:28:44.600 lat (usec): min=18185, max=51708, avg=25843.86, stdev=1666.58 00:28:44.600 clat percentiles (usec): 00:28:44.600 | 1.00th=[24511], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:28:44.600 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:28:44.600 | 70.00th=[25822], 80.00th=[26608], 90.00th=[27395], 95.00th=[27919], 00:28:44.600 | 99.00th=[28181], 99.50th=[28705], 99.90th=[51643], 99.95th=[51643], 00:28:44.600 | 99.99th=[51643] 00:28:44.600 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2445.21, stdev=72.65, samples=19 00:28:44.600 iops : min= 576, max= 640, avg=611.26, stdev=18.17, samples=19 00:28:44.600 lat (msec) : 20=0.26%, 50=99.48%, 100=0.26% 00:28:44.600 cpu : usr=98.92%, sys=0.69%, ctx=36, majf=0, minf=24 00:28:44.600 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:44.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.600 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.600 issued rwts: total=6128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.600 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.600 filename0: (groupid=0, jobs=1): err= 0: pid=2866627: Mon Jul 15 
20:54:17 2024 00:28:44.600 read: IOPS=612, BW=2451KiB/s (2510kB/s)(23.9MiB/10002msec) 00:28:44.600 slat (nsec): min=6169, max=93489, avg=40055.16, stdev=20982.42 00:28:44.600 clat (usec): min=11307, max=43028, avg=25722.02, stdev=1567.70 00:28:44.600 lat (usec): min=11360, max=43047, avg=25762.07, stdev=1567.54 00:28:44.600 clat percentiles (usec): 00:28:44.600 | 1.00th=[24511], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:28:44.600 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:28:44.600 | 70.00th=[25822], 80.00th=[26608], 90.00th=[27395], 95.00th=[27919], 00:28:44.600 | 99.00th=[28181], 99.50th=[33817], 99.90th=[42730], 99.95th=[43254], 00:28:44.600 | 99.99th=[43254] 00:28:44.600 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2445.42, stdev=94.11, samples=19 00:28:44.600 iops : min= 576, max= 640, avg=611.32, stdev=23.54, samples=19 00:28:44.600 lat (msec) : 20=0.26%, 50=99.74% 00:28:44.600 cpu : usr=99.12%, sys=0.49%, ctx=13, majf=0, minf=34 00:28:44.600 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:44.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.600 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.600 issued rwts: total=6128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.600 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.600 filename0: (groupid=0, jobs=1): err= 0: pid=2866628: Mon Jul 15 20:54:17 2024 00:28:44.600 read: IOPS=612, BW=2451KiB/s (2510kB/s)(23.9MiB/10001msec) 00:28:44.600 slat (nsec): min=6403, max=80372, avg=39909.99, stdev=14341.56 00:28:44.600 clat (usec): min=16024, max=50292, avg=25757.84, stdev=1348.29 00:28:44.600 lat (usec): min=16034, max=50309, avg=25797.75, stdev=1348.31 00:28:44.600 clat percentiles (usec): 00:28:44.600 | 1.00th=[24511], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:28:44.600 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:28:44.600 | 70.00th=[25822], 80.00th=[26608], 90.00th=[27395], 95.00th=[27657], 00:28:44.600 | 99.00th=[28181], 99.50th=[28705], 99.90th=[41681], 99.95th=[41681], 00:28:44.600 | 99.99th=[50070] 00:28:44.600 bw ( KiB/s): min= 2299, max= 2560, per=4.15%, avg=2444.95, stdev=94.86, samples=19 00:28:44.600 iops : min= 574, max= 640, avg=611.16, stdev=23.79, samples=19 00:28:44.600 lat (msec) : 20=0.29%, 50=99.67%, 100=0.03% 00:28:44.600 cpu : usr=99.02%, sys=0.58%, ctx=24, majf=0, minf=37 00:28:44.600 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:44.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.600 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.600 issued rwts: total=6128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.600 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.600 filename0: (groupid=0, jobs=1): err= 0: pid=2866629: Mon Jul 15 20:54:17 2024 00:28:44.600 read: IOPS=612, BW=2450KiB/s (2509kB/s)(23.9MiB/10004msec) 00:28:44.600 slat (usec): min=7, max=112, avg=39.35, stdev=19.88 00:28:44.600 clat (usec): min=17592, max=47251, avg=25796.88, stdev=1270.54 00:28:44.600 lat (usec): min=17601, max=47275, avg=25836.23, stdev=1268.61 00:28:44.600 clat percentiles (usec): 00:28:44.600 | 1.00th=[24511], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:28:44.600 | 30.00th=[25297], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:28:44.600 | 70.00th=[25822], 80.00th=[26608], 90.00th=[27657], 95.00th=[27919], 
00:28:44.600 | 99.00th=[28181], 99.50th=[29492], 99.90th=[40109], 99.95th=[40633], 00:28:44.600 | 99.99th=[47449] 00:28:44.600 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2445.47, stdev=84.20, samples=19 00:28:44.600 iops : min= 576, max= 640, avg=611.37, stdev=21.05, samples=19 00:28:44.600 lat (msec) : 20=0.03%, 50=99.97% 00:28:44.600 cpu : usr=97.13%, sys=1.51%, ctx=86, majf=0, minf=23 00:28:44.600 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:44.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.601 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.601 issued rwts: total=6128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.601 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.601 filename0: (groupid=0, jobs=1): err= 0: pid=2866630: Mon Jul 15 20:54:17 2024 00:28:44.601 read: IOPS=614, BW=2457KiB/s (2516kB/s)(24.0MiB/10004msec) 00:28:44.601 slat (nsec): min=4828, max=84216, avg=41482.35, stdev=17080.96 00:28:44.601 clat (usec): min=6129, max=45821, avg=25668.90, stdev=1899.69 00:28:44.601 lat (usec): min=6143, max=45835, avg=25710.38, stdev=1900.37 00:28:44.601 clat percentiles (usec): 00:28:44.601 | 1.00th=[24511], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:28:44.601 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:28:44.601 | 70.00th=[25822], 80.00th=[26608], 90.00th=[27395], 95.00th=[27657], 00:28:44.601 | 99.00th=[28181], 99.50th=[29492], 99.90th=[45876], 99.95th=[45876], 00:28:44.601 | 99.99th=[45876] 00:28:44.601 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2445.68, stdev=94.07, samples=19 00:28:44.601 iops : min= 576, max= 640, avg=611.42, stdev=23.52, samples=19 00:28:44.601 lat (msec) : 10=0.26%, 20=0.36%, 50=99.38% 00:28:44.601 cpu : usr=98.50%, sys=0.80%, ctx=32, majf=0, minf=28 00:28:44.601 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:44.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.601 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.601 issued rwts: total=6144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.601 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.601 filename1: (groupid=0, jobs=1): err= 0: pid=2866631: Mon Jul 15 20:54:17 2024 00:28:44.601 read: IOPS=613, BW=2455KiB/s (2514kB/s)(24.0MiB/10005msec) 00:28:44.601 slat (nsec): min=4675, max=83415, avg=23856.30, stdev=17424.68 00:28:44.601 clat (usec): min=5496, max=39113, avg=25896.62, stdev=2792.32 00:28:44.601 lat (usec): min=5503, max=39127, avg=25920.48, stdev=2792.76 00:28:44.601 clat percentiles (usec): 00:28:44.601 | 1.00th=[19792], 5.00th=[21365], 10.00th=[22938], 20.00th=[25035], 00:28:44.601 | 30.00th=[25297], 40.00th=[25297], 50.00th=[25560], 60.00th=[25822], 00:28:44.601 | 70.00th=[26346], 80.00th=[27395], 90.00th=[28181], 95.00th=[30540], 00:28:44.601 | 99.00th=[35914], 99.50th=[36963], 99.90th=[39060], 99.95th=[39060], 00:28:44.601 | 99.99th=[39060] 00:28:44.601 bw ( KiB/s): min= 2208, max= 2560, per=4.15%, avg=2442.11, stdev=93.17, samples=19 00:28:44.601 iops : min= 552, max= 640, avg=610.53, stdev=23.29, samples=19 00:28:44.601 lat (msec) : 10=0.26%, 20=1.12%, 50=98.62% 00:28:44.601 cpu : usr=98.38%, sys=0.88%, ctx=31, majf=0, minf=24 00:28:44.601 IO depths : 1=2.4%, 2=5.1%, 4=12.3%, 8=67.6%, 16=12.5%, 32=0.0%, >=64=0.0% 00:28:44.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.601 
complete : 0=0.0%, 4=91.3%, 8=5.4%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.601 issued rwts: total=6140,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.601 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.601 filename1: (groupid=0, jobs=1): err= 0: pid=2866632: Mon Jul 15 20:54:17 2024 00:28:44.601 read: IOPS=612, BW=2450KiB/s (2509kB/s)(23.9MiB/10004msec) 00:28:44.601 slat (nsec): min=7170, max=87108, avg=35567.75, stdev=16304.37 00:28:44.601 clat (usec): min=22827, max=40309, avg=25796.14, stdev=1216.09 00:28:44.601 lat (usec): min=22861, max=40349, avg=25831.70, stdev=1216.05 00:28:44.601 clat percentiles (usec): 00:28:44.601 | 1.00th=[24511], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:28:44.601 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:28:44.601 | 70.00th=[25822], 80.00th=[26608], 90.00th=[27395], 95.00th=[27919], 00:28:44.601 | 99.00th=[28443], 99.50th=[28967], 99.90th=[40109], 99.95th=[40109], 00:28:44.601 | 99.99th=[40109] 00:28:44.601 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2445.47, stdev=84.20, samples=19 00:28:44.601 iops : min= 576, max= 640, avg=611.37, stdev=21.05, samples=19 00:28:44.601 lat (msec) : 50=100.00% 00:28:44.601 cpu : usr=97.94%, sys=1.29%, ctx=42, majf=0, minf=29 00:28:44.601 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:44.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.601 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.601 issued rwts: total=6128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.601 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.601 filename1: (groupid=0, jobs=1): err= 0: pid=2866633: Mon Jul 15 20:54:17 2024 00:28:44.601 read: IOPS=612, BW=2450KiB/s (2509kB/s)(23.9MiB/10005msec) 00:28:44.601 slat (nsec): min=7164, max=93878, avg=39506.07, stdev=19738.23 00:28:44.601 clat (usec): min=22871, max=40518, avg=25740.33, stdev=1232.98 00:28:44.601 lat (usec): min=22889, max=40537, avg=25779.84, stdev=1233.21 00:28:44.601 clat percentiles (usec): 00:28:44.601 | 1.00th=[24511], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:28:44.601 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:28:44.601 | 70.00th=[25822], 80.00th=[26608], 90.00th=[27395], 95.00th=[27919], 00:28:44.601 | 99.00th=[28181], 99.50th=[28967], 99.90th=[40633], 99.95th=[40633], 00:28:44.601 | 99.99th=[40633] 00:28:44.601 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2445.47, stdev=84.20, samples=19 00:28:44.601 iops : min= 576, max= 640, avg=611.37, stdev=21.05, samples=19 00:28:44.601 lat (msec) : 50=100.00% 00:28:44.601 cpu : usr=94.02%, sys=3.04%, ctx=889, majf=0, minf=39 00:28:44.601 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:44.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.601 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.601 issued rwts: total=6128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.601 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.601 filename1: (groupid=0, jobs=1): err= 0: pid=2866634: Mon Jul 15 20:54:17 2024 00:28:44.601 read: IOPS=618, BW=2476KiB/s (2535kB/s)(24.2MiB/10004msec) 00:28:44.601 slat (nsec): min=6755, max=94218, avg=34164.09, stdev=22081.84 00:28:44.601 clat (usec): min=3314, max=29785, avg=25582.59, stdev=2335.61 00:28:44.601 lat (usec): min=3323, max=29800, avg=25616.75, stdev=2335.72 
00:28:44.601 clat percentiles (usec): 00:28:44.601 | 1.00th=[ 9503], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:28:44.601 | 30.00th=[25297], 40.00th=[25297], 50.00th=[25560], 60.00th=[25560], 00:28:44.601 | 70.00th=[25822], 80.00th=[26608], 90.00th=[27657], 95.00th=[27919], 00:28:44.601 | 99.00th=[28181], 99.50th=[28443], 99.90th=[29754], 99.95th=[29754], 00:28:44.601 | 99.99th=[29754] 00:28:44.601 bw ( KiB/s): min= 2304, max= 2944, per=4.20%, avg=2472.11, stdev=141.65, samples=19 00:28:44.601 iops : min= 576, max= 736, avg=618.00, stdev=35.40, samples=19 00:28:44.601 lat (msec) : 4=0.48%, 10=0.55%, 20=0.26%, 50=98.71% 00:28:44.601 cpu : usr=98.43%, sys=0.91%, ctx=105, majf=0, minf=27 00:28:44.601 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:44.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.601 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.601 issued rwts: total=6192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.601 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.601 filename1: (groupid=0, jobs=1): err= 0: pid=2866635: Mon Jul 15 20:54:17 2024 00:28:44.601 read: IOPS=612, BW=2449KiB/s (2508kB/s)(23.9MiB/10010msec) 00:28:44.601 slat (nsec): min=6768, max=95429, avg=46203.66, stdev=19866.22 00:28:44.601 clat (usec): min=16890, max=53752, avg=25761.17, stdev=1481.99 00:28:44.601 lat (usec): min=16899, max=53772, avg=25807.37, stdev=1479.12 00:28:44.601 clat percentiles (usec): 00:28:44.601 | 1.00th=[24511], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:28:44.601 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:28:44.601 | 70.00th=[25822], 80.00th=[26608], 90.00th=[27395], 95.00th=[27919], 00:28:44.601 | 99.00th=[28181], 99.50th=[29230], 99.90th=[45876], 99.95th=[45876], 00:28:44.601 | 99.99th=[53740] 00:28:44.601 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2445.47, stdev=84.20, samples=19 00:28:44.601 iops : min= 576, max= 640, avg=611.37, stdev=21.05, samples=19 00:28:44.601 lat (msec) : 20=0.03%, 50=99.93%, 100=0.03% 00:28:44.601 cpu : usr=97.03%, sys=1.54%, ctx=131, majf=0, minf=25 00:28:44.601 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:44.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.601 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.601 issued rwts: total=6128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.601 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.601 filename1: (groupid=0, jobs=1): err= 0: pid=2866636: Mon Jul 15 20:54:17 2024 00:28:44.601 read: IOPS=613, BW=2455KiB/s (2514kB/s)(24.0MiB/10012msec) 00:28:44.601 slat (usec): min=6, max=137, avg=40.79, stdev=15.37 00:28:44.601 clat (usec): min=10269, max=39721, avg=25714.73, stdev=1483.49 00:28:44.601 lat (usec): min=10283, max=39739, avg=25755.53, stdev=1484.69 00:28:44.601 clat percentiles (usec): 00:28:44.601 | 1.00th=[24511], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:28:44.601 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:28:44.601 | 70.00th=[25822], 80.00th=[26608], 90.00th=[27395], 95.00th=[27657], 00:28:44.601 | 99.00th=[28181], 99.50th=[28967], 99.90th=[39584], 99.95th=[39584], 00:28:44.601 | 99.99th=[39584] 00:28:44.601 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2445.42, stdev=94.11, samples=19 00:28:44.601 iops : min= 576, max= 640, avg=611.32, stdev=23.54, samples=19 
00:28:44.601 lat (msec) : 20=0.62%, 50=99.38% 00:28:44.601 cpu : usr=98.28%, sys=1.04%, ctx=37, majf=0, minf=27 00:28:44.601 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:44.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.601 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.601 issued rwts: total=6144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.601 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.601 filename1: (groupid=0, jobs=1): err= 0: pid=2866637: Mon Jul 15 20:54:17 2024 00:28:44.601 read: IOPS=614, BW=2457KiB/s (2516kB/s)(24.0MiB/10004msec) 00:28:44.601 slat (nsec): min=4376, max=89752, avg=38527.75, stdev=20272.82 00:28:44.601 clat (usec): min=6577, max=39044, avg=25668.30, stdev=1574.68 00:28:44.601 lat (usec): min=6587, max=39059, avg=25706.83, stdev=1576.10 00:28:44.601 clat percentiles (usec): 00:28:44.601 | 1.00th=[24249], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:28:44.601 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:28:44.601 | 70.00th=[25822], 80.00th=[26608], 90.00th=[27395], 95.00th=[27657], 00:28:44.601 | 99.00th=[28181], 99.50th=[28967], 99.90th=[39060], 99.95th=[39060], 00:28:44.601 | 99.99th=[39060] 00:28:44.601 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2445.47, stdev=84.20, samples=19 00:28:44.601 iops : min= 576, max= 640, avg=611.37, stdev=21.05, samples=19 00:28:44.601 lat (msec) : 10=0.26%, 20=0.26%, 50=99.48% 00:28:44.601 cpu : usr=98.97%, sys=0.60%, ctx=64, majf=0, minf=32 00:28:44.601 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:44.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.601 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.601 issued rwts: total=6144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.601 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.601 filename1: (groupid=0, jobs=1): err= 0: pid=2866638: Mon Jul 15 20:54:17 2024 00:28:44.601 read: IOPS=612, BW=2449KiB/s (2507kB/s)(23.9MiB/10011msec) 00:28:44.601 slat (nsec): min=4853, max=92486, avg=41490.37, stdev=17893.06 00:28:44.601 clat (usec): min=18012, max=51744, avg=25759.78, stdev=1684.49 00:28:44.601 lat (usec): min=18027, max=51759, avg=25801.27, stdev=1683.65 00:28:44.601 clat percentiles (usec): 00:28:44.601 | 1.00th=[24511], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:28:44.601 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:28:44.601 | 70.00th=[25822], 80.00th=[26608], 90.00th=[27395], 95.00th=[27657], 00:28:44.601 | 99.00th=[28181], 99.50th=[28705], 99.90th=[51643], 99.95th=[51643], 00:28:44.601 | 99.99th=[51643] 00:28:44.601 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2445.21, stdev=72.65, samples=19 00:28:44.601 iops : min= 576, max= 640, avg=611.26, stdev=18.17, samples=19 00:28:44.601 lat (msec) : 20=0.26%, 50=99.48%, 100=0.26% 00:28:44.601 cpu : usr=97.10%, sys=1.47%, ctx=136, majf=0, minf=32 00:28:44.601 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:44.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.601 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.601 issued rwts: total=6128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.601 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.601 filename2: (groupid=0, jobs=1): err= 0: 
pid=2866639: Mon Jul 15 20:54:17 2024 00:28:44.602 read: IOPS=618, BW=2475KiB/s (2534kB/s)(24.2MiB/10004msec) 00:28:44.602 slat (nsec): min=5704, max=97755, avg=16736.64, stdev=13424.61 00:28:44.602 clat (usec): min=5727, max=58430, avg=25793.86, stdev=2832.30 00:28:44.602 lat (usec): min=5735, max=58449, avg=25810.60, stdev=2831.15 00:28:44.602 clat percentiles (usec): 00:28:44.602 | 1.00th=[20055], 5.00th=[21103], 10.00th=[22152], 20.00th=[25297], 00:28:44.602 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25560], 60.00th=[25822], 00:28:44.602 | 70.00th=[26346], 80.00th=[27657], 90.00th=[28443], 95.00th=[30016], 00:28:44.602 | 99.00th=[32637], 99.50th=[34341], 99.90th=[45351], 99.95th=[45351], 00:28:44.602 | 99.99th=[58459] 00:28:44.602 bw ( KiB/s): min= 2292, max= 2592, per=4.18%, avg=2464.21, stdev=75.48, samples=19 00:28:44.602 iops : min= 573, max= 648, avg=616.05, stdev=18.87, samples=19 00:28:44.602 lat (msec) : 10=0.16%, 20=0.79%, 50=99.01%, 100=0.03% 00:28:44.602 cpu : usr=97.35%, sys=1.36%, ctx=59, majf=0, minf=25 00:28:44.602 IO depths : 1=0.1%, 2=0.1%, 4=1.4%, 8=81.1%, 16=17.4%, 32=0.0%, >=64=0.0% 00:28:44.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.602 complete : 0=0.0%, 4=89.2%, 8=9.8%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.602 issued rwts: total=6190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.602 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.602 filename2: (groupid=0, jobs=1): err= 0: pid=2866640: Mon Jul 15 20:54:17 2024 00:28:44.602 read: IOPS=612, BW=2450KiB/s (2509kB/s)(23.9MiB/10004msec) 00:28:44.602 slat (nsec): min=6874, max=77387, avg=34152.79, stdev=13954.58 00:28:44.602 clat (usec): min=22920, max=40405, avg=25834.53, stdev=1219.60 00:28:44.602 lat (usec): min=22948, max=40429, avg=25868.68, stdev=1218.80 00:28:44.602 clat percentiles (usec): 00:28:44.602 | 1.00th=[24511], 5.00th=[24773], 10.00th=[25035], 20.00th=[25035], 00:28:44.602 | 30.00th=[25297], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:28:44.602 | 70.00th=[25822], 80.00th=[26608], 90.00th=[27657], 95.00th=[27919], 00:28:44.602 | 99.00th=[28181], 99.50th=[29230], 99.90th=[40109], 99.95th=[40109], 00:28:44.602 | 99.99th=[40633] 00:28:44.602 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2445.47, stdev=84.20, samples=19 00:28:44.602 iops : min= 576, max= 640, avg=611.37, stdev=21.05, samples=19 00:28:44.602 lat (msec) : 50=100.00% 00:28:44.602 cpu : usr=98.96%, sys=0.62%, ctx=34, majf=0, minf=26 00:28:44.602 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:44.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.602 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.602 issued rwts: total=6128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.602 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.602 filename2: (groupid=0, jobs=1): err= 0: pid=2866641: Mon Jul 15 20:54:17 2024 00:28:44.602 read: IOPS=612, BW=2450KiB/s (2509kB/s)(23.9MiB/10004msec) 00:28:44.602 slat (nsec): min=6809, max=84898, avg=24422.37, stdev=18479.52 00:28:44.602 clat (usec): min=22979, max=40381, avg=25938.81, stdev=1221.93 00:28:44.602 lat (usec): min=23039, max=40407, avg=25963.23, stdev=1219.31 00:28:44.602 clat percentiles (usec): 00:28:44.602 | 1.00th=[24511], 5.00th=[24773], 10.00th=[25035], 20.00th=[25297], 00:28:44.602 | 30.00th=[25297], 40.00th=[25297], 50.00th=[25560], 60.00th=[25560], 00:28:44.602 | 70.00th=[26084], 80.00th=[26608], 
90.00th=[27657], 95.00th=[27919], 00:28:44.602 | 99.00th=[28443], 99.50th=[29492], 99.90th=[40109], 99.95th=[40109], 00:28:44.602 | 99.99th=[40633] 00:28:44.602 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2445.47, stdev=84.20, samples=19 00:28:44.602 iops : min= 576, max= 640, avg=611.37, stdev=21.05, samples=19 00:28:44.602 lat (msec) : 50=100.00% 00:28:44.602 cpu : usr=98.73%, sys=0.88%, ctx=21, majf=0, minf=41 00:28:44.602 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:44.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.602 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.602 issued rwts: total=6128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.602 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.602 filename2: (groupid=0, jobs=1): err= 0: pid=2866642: Mon Jul 15 20:54:17 2024 00:28:44.602 read: IOPS=612, BW=2449KiB/s (2508kB/s)(23.9MiB/10008msec) 00:28:44.602 slat (nsec): min=7070, max=95566, avg=46440.07, stdev=19569.96 00:28:44.602 clat (usec): min=16970, max=43914, avg=25755.38, stdev=1369.19 00:28:44.602 lat (usec): min=16979, max=43934, avg=25801.82, stdev=1366.14 00:28:44.602 clat percentiles (usec): 00:28:44.602 | 1.00th=[24511], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:28:44.602 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:28:44.602 | 70.00th=[25822], 80.00th=[26608], 90.00th=[27395], 95.00th=[27919], 00:28:44.602 | 99.00th=[28181], 99.50th=[29230], 99.90th=[43779], 99.95th=[43779], 00:28:44.602 | 99.99th=[43779] 00:28:44.602 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2445.21, stdev=84.26, samples=19 00:28:44.602 iops : min= 576, max= 640, avg=611.26, stdev=21.07, samples=19 00:28:44.602 lat (msec) : 20=0.03%, 50=99.97% 00:28:44.602 cpu : usr=96.93%, sys=1.64%, ctx=214, majf=0, minf=22 00:28:44.602 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:44.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.602 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.602 issued rwts: total=6128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.602 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.602 filename2: (groupid=0, jobs=1): err= 0: pid=2866643: Mon Jul 15 20:54:17 2024 00:28:44.602 read: IOPS=614, BW=2457KiB/s (2516kB/s)(24.0MiB/10004msec) 00:28:44.602 slat (nsec): min=5527, max=91105, avg=44407.27, stdev=21617.03 00:28:44.602 clat (usec): min=5563, max=45685, avg=25611.27, stdev=1861.97 00:28:44.602 lat (usec): min=5598, max=45698, avg=25655.68, stdev=1863.22 00:28:44.602 clat percentiles (usec): 00:28:44.602 | 1.00th=[24249], 5.00th=[24773], 10.00th=[24773], 20.00th=[24773], 00:28:44.602 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25297], 60.00th=[25560], 00:28:44.602 | 70.00th=[25822], 80.00th=[26346], 90.00th=[27395], 95.00th=[27657], 00:28:44.602 | 99.00th=[28181], 99.50th=[29230], 99.90th=[45876], 99.95th=[45876], 00:28:44.602 | 99.99th=[45876] 00:28:44.602 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2445.68, stdev=94.07, samples=19 00:28:44.602 iops : min= 576, max= 640, avg=611.42, stdev=23.52, samples=19 00:28:44.602 lat (msec) : 10=0.26%, 20=0.26%, 50=99.48% 00:28:44.602 cpu : usr=96.69%, sys=1.77%, ctx=137, majf=0, minf=28 00:28:44.602 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:44.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:28:44.602 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.602 issued rwts: total=6144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.602 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.602 filename2: (groupid=0, jobs=1): err= 0: pid=2866644: Mon Jul 15 20:54:17 2024 00:28:44.602 read: IOPS=614, BW=2457KiB/s (2516kB/s)(24.0MiB/10004msec) 00:28:44.602 slat (nsec): min=6556, max=84236, avg=38857.15, stdev=15777.12 00:28:44.602 clat (usec): min=6215, max=45674, avg=25708.81, stdev=1974.73 00:28:44.602 lat (usec): min=6229, max=45688, avg=25747.67, stdev=1975.33 00:28:44.602 clat percentiles (usec): 00:28:44.602 | 1.00th=[20055], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:28:44.602 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:28:44.602 | 70.00th=[25822], 80.00th=[26608], 90.00th=[27395], 95.00th=[27919], 00:28:44.602 | 99.00th=[29230], 99.50th=[31851], 99.90th=[45876], 99.95th=[45876], 00:28:44.602 | 99.99th=[45876] 00:28:44.602 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2445.68, stdev=94.07, samples=19 00:28:44.602 iops : min= 576, max= 640, avg=611.42, stdev=23.52, samples=19 00:28:44.602 lat (msec) : 10=0.26%, 20=0.72%, 50=99.02% 00:28:44.602 cpu : usr=98.85%, sys=0.76%, ctx=25, majf=0, minf=25 00:28:44.602 IO depths : 1=5.7%, 2=12.0%, 4=24.9%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:28:44.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.602 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.602 issued rwts: total=6144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.602 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.602 filename2: (groupid=0, jobs=1): err= 0: pid=2866645: Mon Jul 15 20:54:17 2024 00:28:44.602 read: IOPS=618, BW=2476KiB/s (2535kB/s)(24.2MiB/10004msec) 00:28:44.602 slat (nsec): min=6309, max=84089, avg=30526.10, stdev=18362.79 00:28:44.602 clat (usec): min=3306, max=29744, avg=25607.51, stdev=2328.77 00:28:44.602 lat (usec): min=3321, max=29770, avg=25638.03, stdev=2329.80 00:28:44.602 clat percentiles (usec): 00:28:44.602 | 1.00th=[ 9503], 5.00th=[24773], 10.00th=[25035], 20.00th=[25035], 00:28:44.602 | 30.00th=[25297], 40.00th=[25297], 50.00th=[25560], 60.00th=[25560], 00:28:44.602 | 70.00th=[25822], 80.00th=[26608], 90.00th=[27657], 95.00th=[27919], 00:28:44.602 | 99.00th=[28181], 99.50th=[28443], 99.90th=[29492], 99.95th=[29754], 00:28:44.602 | 99.99th=[29754] 00:28:44.602 bw ( KiB/s): min= 2304, max= 2944, per=4.20%, avg=2472.11, stdev=141.65, samples=19 00:28:44.602 iops : min= 576, max= 736, avg=618.00, stdev=35.40, samples=19 00:28:44.602 lat (msec) : 4=0.26%, 10=0.78%, 20=0.26%, 50=98.71% 00:28:44.602 cpu : usr=98.44%, sys=0.87%, ctx=27, majf=0, minf=32 00:28:44.602 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:44.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.602 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.602 issued rwts: total=6192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.602 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.602 filename2: (groupid=0, jobs=1): err= 0: pid=2866646: Mon Jul 15 20:54:17 2024 00:28:44.602 read: IOPS=614, BW=2456KiB/s (2515kB/s)(24.0MiB/10005msec) 00:28:44.602 slat (nsec): min=6825, max=75896, avg=37586.91, stdev=14108.19 00:28:44.602 clat (usec): min=10268, max=35153, avg=25736.01, stdev=1361.16 00:28:44.602 lat 
(usec): min=10280, max=35187, avg=25773.60, stdev=1362.31 00:28:44.602 clat percentiles (usec): 00:28:44.602 | 1.00th=[24511], 5.00th=[24773], 10.00th=[25035], 20.00th=[25035], 00:28:44.602 | 30.00th=[25297], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:28:44.602 | 70.00th=[25822], 80.00th=[26608], 90.00th=[27395], 95.00th=[27657], 00:28:44.602 | 99.00th=[28181], 99.50th=[28705], 99.90th=[34866], 99.95th=[34866], 00:28:44.602 | 99.99th=[35390] 00:28:44.602 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2445.16, stdev=83.76, samples=19 00:28:44.602 iops : min= 576, max= 640, avg=611.26, stdev=20.90, samples=19 00:28:44.602 lat (msec) : 20=0.52%, 50=99.48% 00:28:44.602 cpu : usr=99.02%, sys=0.61%, ctx=13, majf=0, minf=31 00:28:44.602 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:44.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.602 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.602 issued rwts: total=6144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.602 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:44.602 00:28:44.602 Run status group 0 (all jobs): 00:28:44.602 READ: bw=57.5MiB/s (60.3MB/s), 2448KiB/s-2476KiB/s (2507kB/s-2535kB/s), io=576MiB (604MB), run=10001-10015msec 00:28:44.602 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:28:44.602 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:44.602 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:44.602 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:44.602 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:44.602 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:44.602 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.602 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:44.602 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.602 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:44.602 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.602 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:44.602 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.602 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:44.602 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:44.602 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:44.602 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:44.602 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.602 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:44.602 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.602 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:44.602 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:28:44.602 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:44.603 bdev_null0 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.603 20:54:17 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:44.603 [2024-07-15 20:54:17.854710] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:44.603 bdev_null1 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:44.603 20:54:17 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:44.603 { 00:28:44.603 "params": { 00:28:44.603 "name": "Nvme$subsystem", 00:28:44.603 "trtype": "$TEST_TRANSPORT", 00:28:44.603 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.603 "adrfam": "ipv4", 00:28:44.603 "trsvcid": "$NVMF_PORT", 00:28:44.603 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.603 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.603 "hdgst": ${hdgst:-false}, 00:28:44.603 "ddgst": ${ddgst:-false} 00:28:44.603 }, 00:28:44.603 "method": "bdev_nvme_attach_controller" 00:28:44.603 } 00:28:44.603 EOF 00:28:44.603 )") 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:44.603 { 00:28:44.603 "params": { 00:28:44.603 "name": "Nvme$subsystem", 00:28:44.603 "trtype": "$TEST_TRANSPORT", 00:28:44.603 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.603 "adrfam": "ipv4", 00:28:44.603 "trsvcid": "$NVMF_PORT", 00:28:44.603 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.603 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.603 "hdgst": ${hdgst:-false}, 00:28:44.603 "ddgst": ${ddgst:-false} 00:28:44.603 }, 00:28:44.603 "method": 
"bdev_nvme_attach_controller" 00:28:44.603 } 00:28:44.603 EOF 00:28:44.603 )") 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:44.603 "params": { 00:28:44.603 "name": "Nvme0", 00:28:44.603 "trtype": "tcp", 00:28:44.603 "traddr": "10.0.0.2", 00:28:44.603 "adrfam": "ipv4", 00:28:44.603 "trsvcid": "4420", 00:28:44.603 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:44.603 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:44.603 "hdgst": false, 00:28:44.603 "ddgst": false 00:28:44.603 }, 00:28:44.603 "method": "bdev_nvme_attach_controller" 00:28:44.603 },{ 00:28:44.603 "params": { 00:28:44.603 "name": "Nvme1", 00:28:44.603 "trtype": "tcp", 00:28:44.603 "traddr": "10.0.0.2", 00:28:44.603 "adrfam": "ipv4", 00:28:44.603 "trsvcid": "4420", 00:28:44.603 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:44.603 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:44.603 "hdgst": false, 00:28:44.603 "ddgst": false 00:28:44.603 }, 00:28:44.603 "method": "bdev_nvme_attach_controller" 00:28:44.603 }' 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:44.603 20:54:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:44.603 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:44.603 ... 00:28:44.603 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:44.603 ... 
00:28:44.603 fio-3.35 00:28:44.603 Starting 4 threads 00:28:44.603 EAL: No free 2048 kB hugepages reported on node 1 00:28:49.905 00:28:49.905 filename0: (groupid=0, jobs=1): err= 0: pid=2868991: Mon Jul 15 20:54:24 2024 00:28:49.905 read: IOPS=2621, BW=20.5MiB/s (21.5MB/s)(102MiB/5003msec) 00:28:49.905 slat (nsec): min=6163, max=59143, avg=10813.64, stdev=5604.23 00:28:49.905 clat (usec): min=1227, max=43611, avg=3019.42, stdev=1497.77 00:28:49.905 lat (usec): min=1235, max=43643, avg=3030.23, stdev=1497.75 00:28:49.905 clat percentiles (usec): 00:28:49.905 | 1.00th=[ 1991], 5.00th=[ 2278], 10.00th=[ 2442], 20.00th=[ 2638], 00:28:49.905 | 30.00th=[ 2802], 40.00th=[ 2868], 50.00th=[ 2933], 60.00th=[ 2999], 00:28:49.905 | 70.00th=[ 3032], 80.00th=[ 3163], 90.00th=[ 3490], 95.00th=[ 4113], 00:28:49.905 | 99.00th=[ 4621], 99.50th=[ 4817], 99.90th=[43254], 99.95th=[43779], 00:28:49.905 | 99.99th=[43779] 00:28:49.905 bw ( KiB/s): min=18320, max=23248, per=25.05%, avg=20976.00, stdev=1322.60, samples=10 00:28:49.905 iops : min= 2290, max= 2906, avg=2622.00, stdev=165.33, samples=10 00:28:49.905 lat (msec) : 2=1.04%, 4=93.07%, 10=5.76%, 50=0.12% 00:28:49.905 cpu : usr=96.96%, sys=2.70%, ctx=6, majf=0, minf=0 00:28:49.905 IO depths : 1=0.3%, 2=3.0%, 4=69.6%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:49.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.905 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.905 issued rwts: total=13117,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:49.905 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:49.905 filename0: (groupid=0, jobs=1): err= 0: pid=2868992: Mon Jul 15 20:54:24 2024 00:28:49.905 read: IOPS=2656, BW=20.8MiB/s (21.8MB/s)(104MiB/5002msec) 00:28:49.905 slat (nsec): min=6148, max=55609, avg=10696.38, stdev=5616.87 00:28:49.905 clat (usec): min=990, max=5849, avg=2980.86, stdev=580.02 00:28:49.905 lat (usec): min=997, max=5874, avg=2991.56, stdev=579.86 00:28:49.905 clat percentiles (usec): 00:28:49.905 | 1.00th=[ 1074], 5.00th=[ 2278], 10.00th=[ 2474], 20.00th=[ 2671], 00:28:49.905 | 30.00th=[ 2802], 40.00th=[ 2868], 50.00th=[ 2966], 60.00th=[ 2999], 00:28:49.905 | 70.00th=[ 3064], 80.00th=[ 3163], 90.00th=[ 3654], 95.00th=[ 4293], 00:28:49.905 | 99.00th=[ 4686], 99.50th=[ 4883], 99.90th=[ 5211], 99.95th=[ 5342], 00:28:49.905 | 99.99th=[ 5866] 00:28:49.905 bw ( KiB/s): min=19344, max=24208, per=25.49%, avg=21340.44, stdev=1345.76, samples=9 00:28:49.905 iops : min= 2418, max= 3026, avg=2667.56, stdev=168.22, samples=9 00:28:49.905 lat (usec) : 1000=0.04% 00:28:49.905 lat (msec) : 2=2.90%, 4=89.62%, 10=7.44% 00:28:49.905 cpu : usr=97.02%, sys=2.64%, ctx=10, majf=0, minf=0 00:28:49.905 IO depths : 1=0.1%, 2=1.7%, 4=69.2%, 8=29.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:49.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.905 complete : 0=0.0%, 4=93.8%, 8=6.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.905 issued rwts: total=13290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:49.905 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:49.905 filename1: (groupid=0, jobs=1): err= 0: pid=2868993: Mon Jul 15 20:54:24 2024 00:28:49.905 read: IOPS=2626, BW=20.5MiB/s (21.5MB/s)(103MiB/5003msec) 00:28:49.905 slat (nsec): min=6203, max=62805, avg=12962.58, stdev=9227.38 00:28:49.905 clat (usec): min=1115, max=42815, avg=3008.36, stdev=1082.99 00:28:49.905 lat (usec): min=1149, max=42840, avg=3021.33, stdev=1082.93 00:28:49.905 clat percentiles 
(usec): 00:28:49.905 | 1.00th=[ 2008], 5.00th=[ 2376], 10.00th=[ 2540], 20.00th=[ 2704], 00:28:49.905 | 30.00th=[ 2802], 40.00th=[ 2868], 50.00th=[ 2966], 60.00th=[ 2999], 00:28:49.905 | 70.00th=[ 3064], 80.00th=[ 3163], 90.00th=[ 3523], 95.00th=[ 3982], 00:28:49.905 | 99.00th=[ 4555], 99.50th=[ 4686], 99.90th=[ 5407], 99.95th=[42730], 00:28:49.905 | 99.99th=[42730] 00:28:49.905 bw ( KiB/s): min=20112, max=21824, per=25.22%, avg=21114.67, stdev=598.13, samples=9 00:28:49.905 iops : min= 2514, max= 2728, avg=2639.33, stdev=74.77, samples=9 00:28:49.905 lat (msec) : 2=0.96%, 4=94.17%, 10=4.81%, 50=0.06% 00:28:49.905 cpu : usr=97.22%, sys=2.44%, ctx=9, majf=0, minf=0 00:28:49.905 IO depths : 1=0.2%, 2=2.0%, 4=69.2%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:49.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.905 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.905 issued rwts: total=13141,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:49.905 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:49.905 filename1: (groupid=0, jobs=1): err= 0: pid=2868994: Mon Jul 15 20:54:24 2024 00:28:49.905 read: IOPS=2561, BW=20.0MiB/s (21.0MB/s)(100MiB/5001msec) 00:28:49.905 slat (nsec): min=6112, max=59136, avg=11312.90, stdev=6047.16 00:28:49.905 clat (usec): min=1052, max=43785, avg=3091.26, stdev=1103.93 00:28:49.905 lat (usec): min=1058, max=43811, avg=3102.57, stdev=1103.96 00:28:49.905 clat percentiles (usec): 00:28:49.905 | 1.00th=[ 2114], 5.00th=[ 2540], 10.00th=[ 2671], 20.00th=[ 2802], 00:28:49.905 | 30.00th=[ 2900], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3032], 00:28:49.905 | 70.00th=[ 3130], 80.00th=[ 3261], 90.00th=[ 3556], 95.00th=[ 3949], 00:28:49.905 | 99.00th=[ 4621], 99.50th=[ 4752], 99.90th=[ 5800], 99.95th=[43779], 00:28:49.905 | 99.99th=[43779] 00:28:49.905 bw ( KiB/s): min=19760, max=21680, per=24.54%, avg=20541.33, stdev=642.72, samples=9 00:28:49.905 iops : min= 2470, max= 2710, avg=2567.67, stdev=80.34, samples=9 00:28:49.905 lat (msec) : 2=0.68%, 4=94.60%, 10=4.66%, 50=0.06% 00:28:49.905 cpu : usr=94.40%, sys=4.02%, ctx=284, majf=0, minf=9 00:28:49.905 IO depths : 1=0.3%, 2=2.2%, 4=69.6%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:49.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.905 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.905 issued rwts: total=12809,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:49.905 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:49.905 00:28:49.905 Run status group 0 (all jobs): 00:28:49.905 READ: bw=81.8MiB/s (85.7MB/s), 20.0MiB/s-20.8MiB/s (21.0MB/s-21.8MB/s), io=409MiB (429MB), run=5001-5003msec 00:28:49.905 20:54:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:49.905 20:54:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:49.905 20:54:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:49.905 20:54:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:49.905 20:54:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:49.906 20:54:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:49.906 20:54:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.906 20:54:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
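destroy_subsystems here mirrors the create_subsystems sequence that opened the run: each subsystem is a null bdev with 16-byte metadata and DIF type 1, wrapped in an NVMe-oF subsystem and exposed on a TCP listener, then torn down in reverse order. Condensed, per subsystem 0 (RPC names and arguments exactly as logged; invoking them through scripts/rpc.py rather than the harness's rpc_cmd wrapper is the only assumption):

  # create
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
      --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420
  # destroy: drop the subsystem (and with it the namespace) before deleting
  # the bdev it references
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py bdev_null_delete bdev_null0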
00:28:49.906 20:54:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.906 20:54:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:49.906 20:54:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.906 20:54:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:49.906 20:54:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.906 20:54:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:49.906 20:54:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:49.906 20:54:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:49.906 20:54:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:49.906 20:54:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.906 20:54:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:49.906 20:54:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.906 20:54:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:49.906 20:54:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.906 20:54:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:49.906 20:54:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.906 00:28:49.906 real 0m24.538s 00:28:49.906 user 4m49.875s 00:28:49.906 sys 0m4.872s 00:28:49.906 20:54:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:49.906 20:54:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:49.906 ************************************ 00:28:49.906 END TEST fio_dif_rand_params 00:28:49.906 ************************************ 00:28:49.906 20:54:24 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:49.906 20:54:24 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:49.906 20:54:24 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:49.906 20:54:24 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:49.906 20:54:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:49.906 ************************************ 00:28:49.906 START TEST fio_dif_digest 00:28:49.906 ************************************ 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:28:49.906 20:54:24 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:49.906 bdev_null0 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:49.906 [2024-07-15 20:54:24.373896] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:49.906 { 00:28:49.906 "params": { 
00:28:49.906 "name": "Nvme$subsystem", 00:28:49.906 "trtype": "$TEST_TRANSPORT", 00:28:49.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:49.906 "adrfam": "ipv4", 00:28:49.906 "trsvcid": "$NVMF_PORT", 00:28:49.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:49.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:49.906 "hdgst": ${hdgst:-false}, 00:28:49.906 "ddgst": ${ddgst:-false} 00:28:49.906 }, 00:28:49.906 "method": "bdev_nvme_attach_controller" 00:28:49.906 } 00:28:49.906 EOF 00:28:49.906 )") 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:49.906 20:54:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:28:50.192 20:54:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:50.192 20:54:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:28:50.192 20:54:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:28:50.192 20:54:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:50.192 "params": { 00:28:50.192 "name": "Nvme0", 00:28:50.192 "trtype": "tcp", 00:28:50.192 "traddr": "10.0.0.2", 00:28:50.192 "adrfam": "ipv4", 00:28:50.192 "trsvcid": "4420", 00:28:50.192 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:50.192 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:50.192 "hdgst": true, 00:28:50.192 "ddgst": true 00:28:50.192 }, 00:28:50.192 "method": "bdev_nvme_attach_controller" 00:28:50.192 }' 00:28:50.192 20:54:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:50.192 20:54:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:50.192 20:54:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:50.192 20:54:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:50.192 20:54:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:50.192 20:54:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:50.192 20:54:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:50.192 20:54:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:50.192 20:54:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:50.192 20:54:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:50.457 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:50.457 ... 
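Relative to the earlier fio_dif_rand_params attach, the only functional change in the config printed above is "hdgst": true and "ddgst": true, which make the initiator negotiate NVMe/TCP header and data digests (CRC32C over each PDU header and payload) when it connects. The params block below is as printed; only the surrounding bdev-subsystem wrapper that --spdk_json_conf expects is assumed here:

  cat > bdev.json <<'EOF'
  { "subsystems": [ { "subsystem": "bdev", "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode0",
                  "hostnqn": "nqn.2016-06.io.spdk:host0",
                  "hdgst": true, "ddgst": true } } ] } ] }
  EOF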
00:28:50.457 fio-3.35 00:28:50.457 Starting 3 threads 00:28:50.457 EAL: No free 2048 kB hugepages reported on node 1 00:29:02.649 00:29:02.649 filename0: (groupid=0, jobs=1): err= 0: pid=2870065: Mon Jul 15 20:54:35 2024 00:29:02.649 read: IOPS=264, BW=33.1MiB/s (34.7MB/s)(333MiB/10045msec) 00:29:02.649 slat (nsec): min=6474, max=48856, avg=18082.89, stdev=8385.89 00:29:02.649 clat (usec): min=7851, max=47663, avg=11285.79, stdev=1267.76 00:29:02.649 lat (usec): min=7878, max=47677, avg=11303.87, stdev=1267.87 00:29:02.649 clat percentiles (usec): 00:29:02.649 | 1.00th=[ 9503], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[10552], 00:29:02.649 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11207], 60.00th=[11469], 00:29:02.649 | 70.00th=[11600], 80.00th=[11994], 90.00th=[12387], 95.00th=[12649], 00:29:02.649 | 99.00th=[13304], 99.50th=[13566], 99.90th=[14353], 99.95th=[45351], 00:29:02.649 | 99.99th=[47449] 00:29:02.649 bw ( KiB/s): min=33024, max=34816, per=32.69%, avg=34035.20, stdev=515.19, samples=20 00:29:02.649 iops : min= 258, max= 272, avg=265.90, stdev= 4.02, samples=20 00:29:02.649 lat (msec) : 10=5.94%, 20=93.99%, 50=0.08% 00:29:02.649 cpu : usr=95.91%, sys=3.75%, ctx=24, majf=0, minf=185 00:29:02.649 IO depths : 1=1.2%, 2=98.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:02.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.649 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.649 issued rwts: total=2661,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.649 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:02.649 filename0: (groupid=0, jobs=1): err= 0: pid=2870066: Mon Jul 15 20:54:35 2024 00:29:02.649 read: IOPS=280, BW=35.0MiB/s (36.7MB/s)(352MiB/10046msec) 00:29:02.649 slat (nsec): min=6554, max=62985, avg=23139.71, stdev=7824.91 00:29:02.649 clat (usec): min=8082, max=52292, avg=10667.27, stdev=1318.65 00:29:02.649 lat (usec): min=8095, max=52322, avg=10690.41, stdev=1318.52 00:29:02.649 clat percentiles (usec): 00:29:02.649 | 1.00th=[ 8979], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[10028], 00:29:02.649 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10683], 60.00th=[10814], 00:29:02.649 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11863], 00:29:02.649 | 99.00th=[12518], 99.50th=[12780], 99.90th=[14353], 99.95th=[49546], 00:29:02.649 | 99.99th=[52167] 00:29:02.649 bw ( KiB/s): min=35072, max=36864, per=34.57%, avg=35993.60, stdev=640.13, samples=20 00:29:02.649 iops : min= 274, max= 288, avg=281.20, stdev= 5.00, samples=20 00:29:02.649 lat (msec) : 10=19.26%, 20=80.67%, 50=0.04%, 100=0.04% 00:29:02.649 cpu : usr=96.23%, sys=3.43%, ctx=24, majf=0, minf=135 00:29:02.649 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:02.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.649 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.649 issued rwts: total=2814,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.650 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:02.650 filename0: (groupid=0, jobs=1): err= 0: pid=2870067: Mon Jul 15 20:54:35 2024 00:29:02.650 read: IOPS=268, BW=33.5MiB/s (35.2MB/s)(337MiB/10045msec) 00:29:02.650 slat (nsec): min=6427, max=49905, avg=19186.18, stdev=8243.55 00:29:02.650 clat (usec): min=8642, max=47844, avg=11122.67, stdev=1063.53 00:29:02.650 lat (usec): min=8670, max=47869, avg=11141.86, stdev=1063.77 00:29:02.650 clat percentiles (usec): 00:29:02.650 | 
1.00th=[ 9372], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10421], 00:29:02.650 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:29:02.650 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12125], 95.00th=[12387], 00:29:02.650 | 99.00th=[13042], 99.50th=[13173], 99.90th=[13829], 99.95th=[14091], 00:29:02.650 | 99.99th=[47973] 00:29:02.650 bw ( KiB/s): min=33469, max=35072, per=33.13%, avg=34492.65, stdev=469.03, samples=20 00:29:02.650 iops : min= 261, max= 274, avg=269.45, stdev= 3.72, samples=20 00:29:02.650 lat (msec) : 10=8.09%, 20=91.88%, 50=0.04% 00:29:02.650 cpu : usr=95.62%, sys=4.04%, ctx=19, majf=0, minf=106 00:29:02.650 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:02.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.650 issued rwts: total=2696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.650 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:02.650 00:29:02.650 Run status group 0 (all jobs): 00:29:02.650 READ: bw=102MiB/s (107MB/s), 33.1MiB/s-35.0MiB/s (34.7MB/s-36.7MB/s), io=1021MiB (1071MB), run=10045-10046msec 00:29:02.650 20:54:35 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:29:02.650 20:54:35 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:29:02.650 20:54:35 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:29:02.650 20:54:35 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:02.650 20:54:35 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:29:02.650 20:54:35 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:02.650 20:54:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.650 20:54:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:02.650 20:54:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.650 20:54:35 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:02.650 20:54:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.650 20:54:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:02.650 20:54:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.650 00:29:02.650 real 0m11.039s 00:29:02.650 user 0m35.236s 00:29:02.650 sys 0m1.395s 00:29:02.650 20:54:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:02.650 20:54:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:02.650 ************************************ 00:29:02.650 END TEST fio_dif_digest 00:29:02.650 ************************************ 00:29:02.650 20:54:35 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:29:02.650 20:54:35 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:29:02.650 20:54:35 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:29:02.650 20:54:35 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:02.650 20:54:35 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:29:02.650 20:54:35 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:02.650 20:54:35 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:29:02.650 20:54:35 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:02.650 20:54:35 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:29:02.650 rmmod nvme_tcp 00:29:02.650 rmmod nvme_fabrics 00:29:02.650 rmmod nvme_keyring 00:29:02.650 20:54:35 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:02.650 20:54:35 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:29:02.650 20:54:35 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:29:02.650 20:54:35 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2860922 ']' 00:29:02.650 20:54:35 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2860922 00:29:02.650 20:54:35 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 2860922 ']' 00:29:02.650 20:54:35 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 2860922 00:29:02.650 20:54:35 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:29:02.650 20:54:35 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:02.650 20:54:35 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2860922 00:29:02.650 20:54:35 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:02.650 20:54:35 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:02.650 20:54:35 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2860922' 00:29:02.650 killing process with pid 2860922 00:29:02.650 20:54:35 nvmf_dif -- common/autotest_common.sh@967 -- # kill 2860922 00:29:02.650 20:54:35 nvmf_dif -- common/autotest_common.sh@972 -- # wait 2860922 00:29:02.650 20:54:35 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:29:02.650 20:54:35 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:04.023 Waiting for block devices as requested 00:29:04.023 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:04.023 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:04.023 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:04.023 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:04.023 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:04.281 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:04.281 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:04.281 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:04.281 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:04.539 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:04.539 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:04.539 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:04.797 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:04.797 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:04.797 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:04.797 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:05.055 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:05.055 20:54:39 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:05.055 20:54:39 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:05.055 20:54:39 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:05.055 20:54:39 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:05.055 20:54:39 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:05.055 20:54:39 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:05.055 20:54:39 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:06.956 20:54:41 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:06.956 00:29:06.956 real 1m12.962s 00:29:06.956 user 7m7.583s 00:29:06.956 sys 0m18.218s 00:29:06.956 20:54:41 nvmf_dif -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:29:06.956 20:54:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:06.956 ************************************ 00:29:06.956 END TEST nvmf_dif 00:29:06.956 ************************************ 00:29:07.214 20:54:41 -- common/autotest_common.sh@1142 -- # return 0 00:29:07.214 20:54:41 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:07.214 20:54:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:07.214 20:54:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:07.214 20:54:41 -- common/autotest_common.sh@10 -- # set +x 00:29:07.214 ************************************ 00:29:07.214 START TEST nvmf_abort_qd_sizes 00:29:07.214 ************************************ 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:07.214 * Looking for test storage... 00:29:07.214 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.214 20:54:41 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:29:07.214 20:54:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:12.470 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:12.470 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:29:12.470 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:12.470 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:12.470 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:12.470 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:12.470 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:12.470 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:29:12.470 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:12.470 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:29:12.470 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:29:12.470 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:29:12.470 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:29:12.470 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:29:12.470 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:29:12.470 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:12.470 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:12.470 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:12.470 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:12.470 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:12.470 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:12.470 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:12.470 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:12.470 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:12.470 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:12.470 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:12.470 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:12.470 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:12.470 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:12.471 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:12.471 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:12.471 Found net devices under 0000:86:00.0: cvl_0_0 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:12.471 Found net devices under 0000:86:00.1: cvl_0_1 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
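The two E810 ports discovered above (cvl_0_0 and cvl_0_1) form the test fabric: nvmf_tcp_init, whose commands follow, leaves cvl_0_1 in the root namespace as the initiator side (10.0.0.1) and moves cvl_0_0 into a dedicated namespace for the target (10.0.0.2), so one host can exercise real NICs end to end. Condensed from the records below:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # initiator -> target reachability check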
00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:12.471 20:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:12.471 20:54:46 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:12.471 20:54:46 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:12.471 20:54:46 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:12.471 20:54:46 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:12.471 20:54:46 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:12.471 20:54:46 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:12.471 20:54:46 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:12.471 20:54:46 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:12.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:12.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:29:12.471 00:29:12.471 --- 10.0.0.2 ping statistics --- 00:29:12.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:12.471 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:29:12.471 20:54:46 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:12.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:12.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:29:12.471 00:29:12.471 --- 10.0.0.1 ping statistics --- 00:29:12.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:12.471 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:29:12.471 20:54:46 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:12.471 20:54:46 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:29:12.471 20:54:46 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:12.471 20:54:46 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:14.372 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:14.372 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:14.373 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:14.373 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:14.373 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:14.373 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:14.373 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:14.373 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:14.373 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:14.373 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:14.373 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:14.373 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:14.373 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:14.373 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:14.373 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:14.373 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:15.309 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:29:15.309 20:54:49 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:15.309 20:54:49 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:15.309 20:54:49 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:15.309 20:54:49 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:15.309 20:54:49 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:15.309 20:54:49 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:15.628 20:54:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:29:15.628 20:54:49 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:15.628 20:54:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:15.628 20:54:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:15.628 20:54:49 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2877767 00:29:15.628 20:54:49 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2877767 00:29:15.628 20:54:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 2877767 ']' 00:29:15.628 20:54:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:15.628 20:54:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:15.628 20:54:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:15.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
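nvmf_tcp_init, traced above, wires the two ports into separate network stacks: the target port moves into a private namespace while the initiator port stays in the root namespace, so NVMe/TCP traffic crosses a real link. A condensed sketch using the interface names and addresses from this run, followed by the target launch that nvmfappstart performs inside that namespace (waitforlisten amounts to polling the RPC socket, shown here via a plain rpc.py call):

    # Condensed sketch of nvmf_tcp_init as traced above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open NVMe/TCP port
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # Start the target inside the namespace, then wait for its RPC socket.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
    ./scripts/rpc.py -s /var/tmp/spdk.sock -t 30 rpc_get_methods > /dev/null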
00:29:15.628 20:54:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:15.628 20:54:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:15.628 20:54:49 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:29:15.628 [2024-07-15 20:54:49.848778] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:29:15.629 [2024-07-15 20:54:49.848821] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:15.629 EAL: No free 2048 kB hugepages reported on node 1 00:29:15.629 [2024-07-15 20:54:49.903559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:15.629 [2024-07-15 20:54:49.984604] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:15.629 [2024-07-15 20:54:49.984640] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:15.629 [2024-07-15 20:54:49.984648] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:15.629 [2024-07-15 20:54:49.984654] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:15.629 [2024-07-15 20:54:49.984660] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:15.629 [2024-07-15 20:54:49.984704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:15.629 [2024-07-15 20:54:49.984722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:15.629 [2024-07-15 20:54:49.984808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:15.629 [2024-07-15 20:54:49.984809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:16.200 20:54:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:16.200 20:54:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:29:16.200 20:54:50 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:16.200 20:54:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:16.200 20:54:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:16.459 20:54:50 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:16.459 20:54:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:29:16.459 20:54:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:29:16.459 20:54:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:29:16.459 20:54:50 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:29:16.459 20:54:50 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:29:16.459 20:54:50 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:5e:00.0 ]] 00:29:16.459 20:54:50 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:29:16.459 20:54:50 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:29:16.459 20:54:50 nvmf_abort_qd_sizes -- scripts/common.sh@319 
-- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:29:16.459 20:54:50 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:29:16.459 20:54:50 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:29:16.459 20:54:50 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:29:16.459 20:54:50 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:29:16.459 20:54:50 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:5e:00.0 00:29:16.459 20:54:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:29:16.459 20:54:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:29:16.459 20:54:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:29:16.459 20:54:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:16.459 20:54:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:16.459 20:54:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:16.459 ************************************ 00:29:16.459 START TEST spdk_target_abort 00:29:16.459 ************************************ 00:29:16.459 20:54:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:29:16.459 20:54:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:29:16.459 20:54:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:29:16.459 20:54:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.459 20:54:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:19.740 spdk_targetn1 00:29:19.740 20:54:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.740 20:54:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:19.740 20:54:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.740 20:54:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:19.741 [2024-07-15 20:54:53.572840] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:19.741 20:54:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.741 20:54:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:29:19.741 20:54:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.741 20:54:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:19.741 20:54:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.741 20:54:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:29:19.741 20:54:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.741 20:54:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:19.741 20:54:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:29:19.741 20:54:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:29:19.741 20:54:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.741 20:54:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:19.741 [2024-07-15 20:54:53.605605] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:19.741 20:54:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.741 20:54:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:29:19.741 20:54:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:19.741 20:54:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:19.741 20:54:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:29:19.741 20:54:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:19.741 20:54:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:19.741 20:54:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:19.741 20:54:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:19.741 20:54:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:19.741 20:54:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:19.741 20:54:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:19.741 20:54:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:19.741 20:54:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:19.741 20:54:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:19.741 20:54:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:29:19.741 20:54:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:19.741 20:54:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:19.741 20:54:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:19.741 20:54:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:19.741 20:54:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:19.741 20:54:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:19.741 EAL: No free 2048 kB hugepages reported on node 1 00:29:23.033 Initializing NVMe Controllers 00:29:23.033 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:23.033 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:23.033 Initialization complete. Launching workers. 00:29:23.033 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 14863, failed: 0 00:29:23.033 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1497, failed to submit 13366 00:29:23.033 success 803, unsuccess 694, failed 0 00:29:23.033 20:54:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:23.033 20:54:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:23.033 EAL: No free 2048 kB hugepages reported on node 1 00:29:26.323 Initializing NVMe Controllers 00:29:26.323 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:26.323 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:26.323 Initialization complete. Launching workers. 00:29:26.323 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8761, failed: 0 00:29:26.323 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1222, failed to submit 7539 00:29:26.323 success 340, unsuccess 882, failed 0 00:29:26.323 20:55:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:26.324 20:55:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:26.324 EAL: No free 2048 kB hugepages reported on node 1 00:29:28.860 Initializing NVMe Controllers 00:29:28.860 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:28.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:28.860 Initialization complete. Launching workers. 
00:29:28.860 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38014, failed: 0 00:29:28.860 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2785, failed to submit 35229 00:29:28.860 success 586, unsuccess 2199, failed 0 00:29:28.860 20:55:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:29:28.860 20:55:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.860 20:55:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:28.860 20:55:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.860 20:55:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:29:28.860 20:55:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.860 20:55:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:30.236 20:55:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.236 20:55:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2877767 00:29:30.236 20:55:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 2877767 ']' 00:29:30.236 20:55:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 2877767 00:29:30.236 20:55:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:29:30.236 20:55:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:30.236 20:55:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2877767 00:29:30.236 20:55:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:30.236 20:55:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:30.236 20:55:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2877767' 00:29:30.236 killing process with pid 2877767 00:29:30.236 20:55:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 2877767 00:29:30.236 20:55:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 2877767 00:29:30.493 00:29:30.493 real 0m14.113s 00:29:30.493 user 0m56.236s 00:29:30.493 sys 0m2.294s 00:29:30.493 20:55:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:30.493 20:55:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:30.493 ************************************ 00:29:30.493 END TEST spdk_target_abort 00:29:30.493 ************************************ 00:29:30.493 20:55:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:29:30.493 20:55:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:29:30.493 20:55:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:30.493 20:55:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:30.493 20:55:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:30.493 
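spdk_target_abort drives the same workload at three queue depths; the rabort helper assembles a transport ID string field by field (trtype, adrfam, traddr, trsvcid, subnqn) and reruns the abort example with it. A sketch using the exact values this run used:

    # Sketch of the rabort loop traced above: exercise abort handling at
    # increasing queue depths against the TCP subsystem.
    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in 4 24 64; do
        ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done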
************************************ 00:29:30.493 START TEST kernel_target_abort 00:29:30.493 ************************************ 00:29:30.493 20:55:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:29:30.493 20:55:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:29:30.493 20:55:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:29:30.493 20:55:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:30.493 20:55:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:30.493 20:55:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:30.493 20:55:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:30.493 20:55:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:30.493 20:55:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:30.493 20:55:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:30.493 20:55:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:30.493 20:55:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:30.493 20:55:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:30.493 20:55:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:30.493 20:55:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:29:30.493 20:55:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:30.493 20:55:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:30.493 20:55:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:30.493 20:55:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:29:30.493 20:55:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:29:30.493 20:55:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:29:30.493 20:55:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:30.493 20:55:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:33.024 Waiting for block devices as requested 00:29:33.024 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:33.024 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:33.283 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:33.283 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:33.283 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:33.283 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:33.541 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:33.541 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:33.541 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:33.541 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:33.799 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:33.799 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:33.799 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:34.058 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:34.058 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:34.058 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:34.316 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:34.316 No valid GPT data, bailing 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:34.316 20:55:08 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:29:34.316 00:29:34.316 Discovery Log Number of Records 2, Generation counter 2 00:29:34.316 =====Discovery Log Entry 0====== 00:29:34.316 trtype: tcp 00:29:34.316 adrfam: ipv4 00:29:34.316 subtype: current discovery subsystem 00:29:34.316 treq: not specified, sq flow control disable supported 00:29:34.316 portid: 1 00:29:34.316 trsvcid: 4420 00:29:34.316 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:34.316 traddr: 10.0.0.1 00:29:34.316 eflags: none 00:29:34.316 sectype: none 00:29:34.316 =====Discovery Log Entry 1====== 00:29:34.316 trtype: tcp 00:29:34.316 adrfam: ipv4 00:29:34.316 subtype: nvme subsystem 00:29:34.316 treq: not specified, sq flow control disable supported 00:29:34.316 portid: 1 00:29:34.316 trsvcid: 4420 00:29:34.316 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:34.316 traddr: 10.0.0.1 00:29:34.316 eflags: none 00:29:34.316 sectype: none 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:34.316 20:55:08 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:34.316 20:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:34.574 EAL: No free 2048 kB hugepages reported on node 1 00:29:37.856 Initializing NVMe Controllers 00:29:37.856 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:37.856 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:37.856 Initialization complete. Launching workers. 00:29:37.856 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 77033, failed: 0 00:29:37.856 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 77033, failed to submit 0 00:29:37.856 success 0, unsuccess 77033, failed 0 00:29:37.856 20:55:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:37.856 20:55:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:37.856 EAL: No free 2048 kB hugepages reported on node 1 00:29:40.496 Initializing NVMe Controllers 00:29:40.496 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:40.496 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:40.496 Initialization complete. Launching workers. 
00:29:40.496 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 128716, failed: 0 00:29:40.496 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32398, failed to submit 96318 00:29:40.496 success 0, unsuccess 32398, failed 0 00:29:40.496 20:55:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:40.496 20:55:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:40.756 EAL: No free 2048 kB hugepages reported on node 1 00:29:44.043 Initializing NVMe Controllers 00:29:44.043 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:44.043 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:44.043 Initialization complete. Launching workers. 00:29:44.043 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 123844, failed: 0 00:29:44.043 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30986, failed to submit 92858 00:29:44.043 success 0, unsuccess 30986, failed 0 00:29:44.043 20:55:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:29:44.043 20:55:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:44.043 20:55:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:29:44.043 20:55:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:44.043 20:55:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:44.043 20:55:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:44.043 20:55:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:44.043 20:55:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:29:44.043 20:55:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:29:44.043 20:55:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:46.578 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:46.578 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:46.578 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:46.578 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:46.578 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:46.578 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:46.578 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:46.578 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:46.578 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:46.578 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:46.578 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:46.578 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:46.578 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:46.578 0000:80:04.2 (8086 2021): ioatdma -> 
vfio-pci 00:29:46.578 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:46.578 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:47.147 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:29:47.406 00:29:47.406 real 0m16.724s 00:29:47.406 user 0m7.870s 00:29:47.406 sys 0m4.820s 00:29:47.406 20:55:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:47.406 20:55:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:47.406 ************************************ 00:29:47.406 END TEST kernel_target_abort 00:29:47.406 ************************************ 00:29:47.407 20:55:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:29:47.407 20:55:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:47.407 20:55:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:29:47.407 20:55:21 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:47.407 20:55:21 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:29:47.407 20:55:21 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:47.407 20:55:21 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:29:47.407 20:55:21 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:47.407 20:55:21 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:47.407 rmmod nvme_tcp 00:29:47.407 rmmod nvme_fabrics 00:29:47.407 rmmod nvme_keyring 00:29:47.407 20:55:21 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:47.407 20:55:21 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:29:47.407 20:55:21 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:29:47.407 20:55:21 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2877767 ']' 00:29:47.407 20:55:21 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2877767 00:29:47.407 20:55:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 2877767 ']' 00:29:47.407 20:55:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 2877767 00:29:47.407 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2877767) - No such process 00:29:47.407 20:55:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 2877767 is not found' 00:29:47.407 Process with pid 2877767 is not found 00:29:47.407 20:55:21 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:29:47.407 20:55:21 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:49.963 Waiting for block devices as requested 00:29:49.963 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:49.963 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:49.963 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:49.963 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:49.963 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:49.963 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:49.963 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:50.222 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:50.222 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:50.222 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:50.222 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:50.480 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:50.480 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:50.480 0000:80:04.3 (8086 2021): vfio-pci -> 
ioatdma 00:29:50.738 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:50.738 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:50.738 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:50.738 20:55:25 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:50.738 20:55:25 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:50.738 20:55:25 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:50.738 20:55:25 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:50.738 20:55:25 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.738 20:55:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:50.738 20:55:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:53.273 20:55:27 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:53.273 00:29:53.273 real 0m45.756s 00:29:53.273 user 1m7.467s 00:29:53.273 sys 0m14.282s 00:29:53.273 20:55:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:53.273 20:55:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:53.273 ************************************ 00:29:53.273 END TEST nvmf_abort_qd_sizes 00:29:53.273 ************************************ 00:29:53.273 20:55:27 -- common/autotest_common.sh@1142 -- # return 0 00:29:53.273 20:55:27 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:53.273 20:55:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:53.273 20:55:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:53.273 20:55:27 -- common/autotest_common.sh@10 -- # set +x 00:29:53.273 ************************************ 00:29:53.273 START TEST keyring_file 00:29:53.273 ************************************ 00:29:53.273 20:55:27 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:53.273 * Looking for test storage... 
00:29:53.273 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:29:53.273 20:55:27 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:29:53.273 20:55:27 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:53.273 20:55:27 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:29:53.273 20:55:27 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:53.273 20:55:27 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:53.273 20:55:27 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:53.273 20:55:27 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:53.273 20:55:27 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:53.273 20:55:27 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:53.273 20:55:27 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:53.273 20:55:27 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:53.273 20:55:27 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:53.273 20:55:27 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:53.273 20:55:27 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:53.273 20:55:27 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:53.273 20:55:27 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:53.273 20:55:27 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:53.273 20:55:27 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:53.273 20:55:27 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:53.273 20:55:27 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:53.273 20:55:27 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:53.273 20:55:27 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:53.273 20:55:27 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:53.273 20:55:27 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.273 20:55:27 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.273 20:55:27 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.273 20:55:27 keyring_file -- paths/export.sh@5 -- # export PATH 00:29:53.273 20:55:27 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.273 20:55:27 keyring_file -- nvmf/common.sh@47 -- # : 0 00:29:53.273 20:55:27 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:53.273 20:55:27 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:53.273 20:55:27 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:53.273 20:55:27 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:53.273 20:55:27 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:53.273 20:55:27 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:53.273 20:55:27 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:53.273 20:55:27 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:53.273 20:55:27 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:53.273 20:55:27 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:53.273 20:55:27 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:53.273 20:55:27 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:29:53.273 20:55:27 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:29:53.273 20:55:27 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:29:53.273 20:55:27 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:53.273 20:55:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:53.273 20:55:27 keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:53.273 20:55:27 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:53.273 20:55:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:53.273 20:55:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:53.273 20:55:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.lszSwUGRjN 00:29:53.273 20:55:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:53.273 20:55:27 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:53.273 20:55:27 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:53.273 20:55:27 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:53.273 20:55:27 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:53.273 20:55:27 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:53.273 20:55:27 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:53.273 20:55:27 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.lszSwUGRjN 00:29:53.274 20:55:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.lszSwUGRjN 00:29:53.274 20:55:27 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.lszSwUGRjN 00:29:53.274 20:55:27 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:29:53.274 20:55:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:53.274 20:55:27 keyring_file -- keyring/common.sh@17 -- # name=key1 00:29:53.274 20:55:27 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:53.274 20:55:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:53.274 20:55:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:53.274 20:55:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.sv5FpDbh3D 00:29:53.274 20:55:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:53.274 20:55:27 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:53.274 20:55:27 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:53.274 20:55:27 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:53.274 20:55:27 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:29:53.274 20:55:27 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:53.274 20:55:27 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:53.274 20:55:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.sv5FpDbh3D 00:29:53.274 20:55:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.sv5FpDbh3D 00:29:53.274 20:55:27 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.sv5FpDbh3D 00:29:53.274 20:55:27 keyring_file -- keyring/file.sh@30 -- # tgtpid=2886435 00:29:53.274 20:55:27 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2886435 00:29:53.274 20:55:27 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:29:53.274 20:55:27 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2886435 ']' 00:29:53.274 20:55:27 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:53.274 20:55:27 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:53.274 20:55:27 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:53.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:53.274 20:55:27 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:53.274 20:55:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:53.274 [2024-07-15 20:55:27.597606] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:29:53.274 [2024-07-15 20:55:27.597655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2886435 ] 00:29:53.274 EAL: No free 2048 kB hugepages reported on node 1 00:29:53.274 [2024-07-15 20:55:27.652489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.274 [2024-07-15 20:55:27.732697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.211 20:55:28 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:54.211 20:55:28 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:29:54.211 20:55:28 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:29:54.211 20:55:28 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.211 20:55:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:54.211 [2024-07-15 20:55:28.400032] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:54.211 null0 00:29:54.211 [2024-07-15 20:55:28.432087] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:54.211 [2024-07-15 20:55:28.432401] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:54.211 [2024-07-15 20:55:28.440100] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:54.211 20:55:28 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.211 20:55:28 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:54.211 20:55:28 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:54.211 20:55:28 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:54.211 20:55:28 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:54.211 20:55:28 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:54.211 20:55:28 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:54.211 20:55:28 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:54.211 20:55:28 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:54.211 20:55:28 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.211 20:55:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:54.211 [2024-07-15 20:55:28.452126] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:29:54.211 request: 00:29:54.211 { 00:29:54.211 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:29:54.211 "secure_channel": false, 00:29:54.211 "listen_address": { 00:29:54.211 "trtype": "tcp", 00:29:54.211 "traddr": "127.0.0.1", 00:29:54.211 "trsvcid": "4420" 00:29:54.211 }, 00:29:54.211 "method": "nvmf_subsystem_add_listener", 00:29:54.211 "req_id": 1 00:29:54.211 } 00:29:54.211 Got JSON-RPC error response 00:29:54.211 response: 00:29:54.211 { 00:29:54.211 "code": -32602, 00:29:54.211 "message": "Invalid parameters" 00:29:54.211 } 00:29:54.211 20:55:28 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:54.211 20:55:28 keyring_file -- common/autotest_common.sh@651 -- # es=1 
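The JSON-RPC error above is the expected outcome: the subsystem is already listening on 127.0.0.1:4420, the test re-issues the same add-listener call, and the NOT wrapper asserts that the call fails. A sketch of that negative check, assuming the target from this run is up on /var/tmp/spdk.sock:

    # Sketch of the duplicate-listener check traced above: re-adding the
    # same TCP listener should fail with -32602 (Invalid parameters).
    if ./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 \
            nqn.2016-06.io.spdk:cnode0; then
        echo "unexpected success" >&2; exit 1
    fi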
00:29:54.211 20:55:28 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:54.211 20:55:28 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:54.212 20:55:28 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:54.212 20:55:28 keyring_file -- keyring/file.sh@46 -- # bperfpid=2886592 00:29:54.212 20:55:28 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2886592 /var/tmp/bperf.sock 00:29:54.212 20:55:28 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2886592 ']' 00:29:54.212 20:55:28 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:54.212 20:55:28 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:54.212 20:55:28 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:54.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:54.212 20:55:28 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:54.212 20:55:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:54.212 20:55:28 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:29:54.212 [2024-07-15 20:55:28.503367] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 00:29:54.212 [2024-07-15 20:55:28.503410] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2886592 ] 00:29:54.212 EAL: No free 2048 kB hugepages reported on node 1 00:29:54.212 [2024-07-15 20:55:28.556553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.212 [2024-07-15 20:55:28.629262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:55.149 20:55:29 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:55.149 20:55:29 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:29:55.149 20:55:29 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lszSwUGRjN 00:29:55.149 20:55:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lszSwUGRjN 00:29:55.149 20:55:29 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.sv5FpDbh3D 00:29:55.149 20:55:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.sv5FpDbh3D 00:29:55.407 20:55:29 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:29:55.407 20:55:29 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:29:55.407 20:55:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:55.407 20:55:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:55.407 20:55:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:55.407 20:55:29 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.lszSwUGRjN == \/\t\m\p\/\t\m\p\.\l\s\z\S\w\U\G\R\j\N ]] 00:29:55.407 20:55:29 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:29:55.407 20:55:29 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:29:55.407 20:55:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:55.407 20:55:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:55.407 20:55:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:55.667 20:55:30 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.sv5FpDbh3D == \/\t\m\p\/\t\m\p\.\s\v\5\F\p\D\b\h\3\D ]] 00:29:55.667 20:55:30 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:29:55.667 20:55:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:55.667 20:55:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:55.667 20:55:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:55.667 20:55:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:55.667 20:55:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:55.925 20:55:30 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:29:55.925 20:55:30 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:29:55.925 20:55:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:55.925 20:55:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:55.925 20:55:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:55.925 20:55:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:55.925 20:55:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:55.925 20:55:30 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:29:55.926 20:55:30 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:55.926 20:55:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:56.184 [2024-07-15 20:55:30.535659] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:56.184 nvme0n1 00:29:56.184 20:55:30 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:29:56.184 20:55:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:56.184 20:55:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:56.184 20:55:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:56.184 20:55:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:56.184 20:55:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:56.443 20:55:30 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:29:56.443 20:55:30 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:29:56.443 20:55:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:56.443 20:55:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:56.443 20:55:30 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:56.443 20:55:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:56.443 20:55:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:56.702 20:55:30 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:29:56.702 20:55:30 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:56.702 Running I/O for 1 seconds... 00:29:57.640 00:29:57.640 Latency(us) 00:29:57.640 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:57.640 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:29:57.640 nvme0n1 : 1.01 13590.02 53.09 0.00 0.00 9387.10 2763.91 13962.02 00:29:57.640 =================================================================================================================== 00:29:57.640 Total : 13590.02 53.09 0.00 0.00 9387.10 2763.91 13962.02 00:29:57.640 0 00:29:57.640 20:55:32 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:57.640 20:55:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:57.899 20:55:32 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:29:57.899 20:55:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:57.899 20:55:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:57.899 20:55:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:57.899 20:55:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:57.899 20:55:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:58.180 20:55:32 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:29:58.180 20:55:32 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:29:58.180 20:55:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:58.180 20:55:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:58.180 20:55:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:58.180 20:55:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:58.180 20:55:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:58.180 20:55:32 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:29:58.181 20:55:32 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:58.181 20:55:32 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:58.181 20:55:32 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:58.181 20:55:32 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:58.181 20:55:32 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:58.181 20:55:32 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:58.181 20:55:32 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:58.181 20:55:32 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:58.181 20:55:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:58.440 [2024-07-15 20:55:32.818425] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:58.440 [2024-07-15 20:55:32.818743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f34820 (107): Transport endpoint is not connected 00:29:58.440 [2024-07-15 20:55:32.819737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f34820 (9): Bad file descriptor 00:29:58.440 [2024-07-15 20:55:32.820738] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:58.440 [2024-07-15 20:55:32.820748] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:58.440 [2024-07-15 20:55:32.820755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:58.440 request: 00:29:58.440 { 00:29:58.440 "name": "nvme0", 00:29:58.440 "trtype": "tcp", 00:29:58.440 "traddr": "127.0.0.1", 00:29:58.440 "adrfam": "ipv4", 00:29:58.440 "trsvcid": "4420", 00:29:58.440 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:58.440 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:58.440 "prchk_reftag": false, 00:29:58.440 "prchk_guard": false, 00:29:58.440 "hdgst": false, 00:29:58.440 "ddgst": false, 00:29:58.440 "psk": "key1", 00:29:58.440 "method": "bdev_nvme_attach_controller", 00:29:58.440 "req_id": 1 00:29:58.440 } 00:29:58.440 Got JSON-RPC error response 00:29:58.440 response: 00:29:58.440 { 00:29:58.440 "code": -5, 00:29:58.440 "message": "Input/output error" 00:29:58.440 } 00:29:58.440 20:55:32 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:29:58.440 20:55:32 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:58.440 20:55:32 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:58.440 20:55:32 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:58.440 20:55:32 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:29:58.440 20:55:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:58.440 20:55:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:58.440 20:55:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:58.440 20:55:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:58.440 20:55:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:58.699 20:55:33 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:29:58.699 20:55:33 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:29:58.699 20:55:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:58.699 20:55:33 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:58.699 20:55:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:58.699 20:55:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:58.699 20:55:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:58.958 20:55:33 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:29:58.958 20:55:33 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:29:58.958 20:55:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:58.958 20:55:33 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:29:58.958 20:55:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:29:59.216 20:55:33 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:29:59.216 20:55:33 keyring_file -- keyring/file.sh@77 -- # jq length 00:29:59.216 20:55:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:59.475 20:55:33 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:29:59.475 20:55:33 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.lszSwUGRjN 00:29:59.475 20:55:33 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.lszSwUGRjN 00:29:59.475 20:55:33 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:59.475 20:55:33 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.lszSwUGRjN 00:29:59.475 20:55:33 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:59.475 20:55:33 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:59.475 20:55:33 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:59.475 20:55:33 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:59.475 20:55:33 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lszSwUGRjN 00:29:59.475 20:55:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lszSwUGRjN 00:29:59.475 [2024-07-15 20:55:33.883849] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.lszSwUGRjN': 0100660 00:29:59.475 [2024-07-15 20:55:33.883871] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:29:59.475 request: 00:29:59.475 { 00:29:59.475 "name": "key0", 00:29:59.475 "path": "/tmp/tmp.lszSwUGRjN", 00:29:59.475 "method": "keyring_file_add_key", 00:29:59.475 "req_id": 1 00:29:59.475 } 00:29:59.475 Got JSON-RPC error response 00:29:59.475 response: 00:29:59.475 { 00:29:59.475 "code": -1, 00:29:59.475 "message": "Operation not permitted" 00:29:59.475 } 00:29:59.475 20:55:33 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:29:59.475 20:55:33 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:59.475 20:55:33 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:59.475 20:55:33 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:59.475 20:55:33 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.lszSwUGRjN 00:29:59.475 20:55:33 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lszSwUGRjN 00:29:59.475 20:55:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lszSwUGRjN 00:29:59.733 20:55:34 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.lszSwUGRjN 00:29:59.733 20:55:34 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:29:59.733 20:55:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:59.733 20:55:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:59.733 20:55:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:59.733 20:55:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:59.733 20:55:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:59.993 20:55:34 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:29:59.993 20:55:34 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:59.993 20:55:34 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:59.993 20:55:34 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:59.993 20:55:34 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:59.993 20:55:34 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:59.993 20:55:34 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:59.993 20:55:34 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:59.993 20:55:34 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:59.993 20:55:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:59.993 [2024-07-15 20:55:34.413278] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.lszSwUGRjN': No such file or directory 00:29:59.993 [2024-07-15 20:55:34.413298] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:29:59.993 [2024-07-15 20:55:34.413318] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:29:59.993 [2024-07-15 20:55:34.413324] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:59.993 [2024-07-15 20:55:34.413330] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:29:59.993 request: 00:29:59.993 { 00:29:59.993 "name": "nvme0", 00:29:59.993 "trtype": "tcp", 00:29:59.993 "traddr": "127.0.0.1", 00:29:59.993 "adrfam": "ipv4", 00:29:59.993 
"trsvcid": "4420", 00:29:59.993 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:59.993 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:59.993 "prchk_reftag": false, 00:29:59.993 "prchk_guard": false, 00:29:59.993 "hdgst": false, 00:29:59.993 "ddgst": false, 00:29:59.993 "psk": "key0", 00:29:59.993 "method": "bdev_nvme_attach_controller", 00:29:59.993 "req_id": 1 00:29:59.993 } 00:29:59.993 Got JSON-RPC error response 00:29:59.993 response: 00:29:59.993 { 00:29:59.993 "code": -19, 00:29:59.993 "message": "No such device" 00:29:59.993 } 00:29:59.993 20:55:34 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:29:59.993 20:55:34 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:59.993 20:55:34 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:59.993 20:55:34 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:59.993 20:55:34 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:29:59.993 20:55:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:00.252 20:55:34 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:00.252 20:55:34 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:00.252 20:55:34 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:00.252 20:55:34 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:00.252 20:55:34 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:00.252 20:55:34 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:00.252 20:55:34 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.JCztPr7DQb 00:30:00.252 20:55:34 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:00.252 20:55:34 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:00.252 20:55:34 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:00.252 20:55:34 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:00.252 20:55:34 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:00.252 20:55:34 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:00.252 20:55:34 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:00.252 20:55:34 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.JCztPr7DQb 00:30:00.252 20:55:34 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.JCztPr7DQb 00:30:00.252 20:55:34 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.JCztPr7DQb 00:30:00.252 20:55:34 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.JCztPr7DQb 00:30:00.252 20:55:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.JCztPr7DQb 00:30:00.511 20:55:34 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:00.511 20:55:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:00.771 nvme0n1 00:30:00.771 
20:55:35 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:30:00.771 20:55:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:00.771 20:55:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:00.771 20:55:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:00.771 20:55:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:00.771 20:55:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:00.771 20:55:35 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:30:00.771 20:55:35 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:30:00.771 20:55:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:01.030 20:55:35 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:30:01.030 20:55:35 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:30:01.030 20:55:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:01.030 20:55:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:01.030 20:55:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:01.289 20:55:35 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:30:01.289 20:55:35 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:30:01.289 20:55:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:01.289 20:55:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:01.289 20:55:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:01.289 20:55:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:01.289 20:55:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:01.289 20:55:35 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:30:01.289 20:55:35 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:01.289 20:55:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:01.549 20:55:35 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:30:01.549 20:55:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:01.549 20:55:35 keyring_file -- keyring/file.sh@104 -- # jq length 00:30:01.808 20:55:36 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:30:01.808 20:55:36 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.JCztPr7DQb 00:30:01.808 20:55:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.JCztPr7DQb 00:30:02.066 20:55:36 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.sv5FpDbh3D 00:30:02.066 20:55:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.sv5FpDbh3D 00:30:02.066 20:55:36 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:02.066 20:55:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:02.325 nvme0n1 00:30:02.325 20:55:36 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:30:02.325 20:55:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:30:02.585 20:55:36 keyring_file -- keyring/file.sh@112 -- # config='{ 00:30:02.585 "subsystems": [ 00:30:02.585 { 00:30:02.585 "subsystem": "keyring", 00:30:02.585 "config": [ 00:30:02.585 { 00:30:02.585 "method": "keyring_file_add_key", 00:30:02.585 "params": { 00:30:02.585 "name": "key0", 00:30:02.585 "path": "/tmp/tmp.JCztPr7DQb" 00:30:02.585 } 00:30:02.585 }, 00:30:02.585 { 00:30:02.585 "method": "keyring_file_add_key", 00:30:02.585 "params": { 00:30:02.585 "name": "key1", 00:30:02.585 "path": "/tmp/tmp.sv5FpDbh3D" 00:30:02.585 } 00:30:02.585 } 00:30:02.585 ] 00:30:02.585 }, 00:30:02.585 { 00:30:02.585 "subsystem": "iobuf", 00:30:02.585 "config": [ 00:30:02.585 { 00:30:02.585 "method": "iobuf_set_options", 00:30:02.585 "params": { 00:30:02.585 "small_pool_count": 8192, 00:30:02.585 "large_pool_count": 1024, 00:30:02.585 "small_bufsize": 8192, 00:30:02.585 "large_bufsize": 135168 00:30:02.585 } 00:30:02.585 } 00:30:02.585 ] 00:30:02.585 }, 00:30:02.585 { 00:30:02.585 "subsystem": "sock", 00:30:02.585 "config": [ 00:30:02.585 { 00:30:02.585 "method": "sock_set_default_impl", 00:30:02.585 "params": { 00:30:02.585 "impl_name": "posix" 00:30:02.585 } 00:30:02.585 }, 00:30:02.585 { 00:30:02.585 "method": "sock_impl_set_options", 00:30:02.585 "params": { 00:30:02.585 "impl_name": "ssl", 00:30:02.585 "recv_buf_size": 4096, 00:30:02.585 "send_buf_size": 4096, 00:30:02.585 "enable_recv_pipe": true, 00:30:02.585 "enable_quickack": false, 00:30:02.585 "enable_placement_id": 0, 00:30:02.585 "enable_zerocopy_send_server": true, 00:30:02.585 "enable_zerocopy_send_client": false, 00:30:02.585 "zerocopy_threshold": 0, 00:30:02.585 "tls_version": 0, 00:30:02.585 "enable_ktls": false 00:30:02.585 } 00:30:02.585 }, 00:30:02.585 { 00:30:02.585 "method": "sock_impl_set_options", 00:30:02.585 "params": { 00:30:02.585 "impl_name": "posix", 00:30:02.585 "recv_buf_size": 2097152, 00:30:02.585 "send_buf_size": 2097152, 00:30:02.585 "enable_recv_pipe": true, 00:30:02.585 "enable_quickack": false, 00:30:02.585 "enable_placement_id": 0, 00:30:02.585 "enable_zerocopy_send_server": true, 00:30:02.585 "enable_zerocopy_send_client": false, 00:30:02.585 "zerocopy_threshold": 0, 00:30:02.585 "tls_version": 0, 00:30:02.585 "enable_ktls": false 00:30:02.585 } 00:30:02.585 } 00:30:02.585 ] 00:30:02.585 }, 00:30:02.585 { 00:30:02.585 "subsystem": "vmd", 00:30:02.585 "config": [] 00:30:02.585 }, 00:30:02.585 { 00:30:02.585 "subsystem": "accel", 00:30:02.585 "config": [ 00:30:02.585 { 00:30:02.585 "method": "accel_set_options", 00:30:02.585 "params": { 00:30:02.585 "small_cache_size": 128, 00:30:02.585 "large_cache_size": 16, 00:30:02.585 "task_count": 2048, 00:30:02.585 "sequence_count": 2048, 00:30:02.585 "buf_count": 2048 00:30:02.585 } 00:30:02.585 } 00:30:02.585 ] 00:30:02.585 
}, 00:30:02.585 { 00:30:02.585 "subsystem": "bdev", 00:30:02.585 "config": [ 00:30:02.585 { 00:30:02.585 "method": "bdev_set_options", 00:30:02.585 "params": { 00:30:02.585 "bdev_io_pool_size": 65535, 00:30:02.585 "bdev_io_cache_size": 256, 00:30:02.585 "bdev_auto_examine": true, 00:30:02.585 "iobuf_small_cache_size": 128, 00:30:02.585 "iobuf_large_cache_size": 16 00:30:02.585 } 00:30:02.585 }, 00:30:02.585 { 00:30:02.585 "method": "bdev_raid_set_options", 00:30:02.585 "params": { 00:30:02.585 "process_window_size_kb": 1024 00:30:02.585 } 00:30:02.585 }, 00:30:02.585 { 00:30:02.585 "method": "bdev_iscsi_set_options", 00:30:02.585 "params": { 00:30:02.585 "timeout_sec": 30 00:30:02.585 } 00:30:02.585 }, 00:30:02.585 { 00:30:02.585 "method": "bdev_nvme_set_options", 00:30:02.585 "params": { 00:30:02.585 "action_on_timeout": "none", 00:30:02.585 "timeout_us": 0, 00:30:02.585 "timeout_admin_us": 0, 00:30:02.585 "keep_alive_timeout_ms": 10000, 00:30:02.585 "arbitration_burst": 0, 00:30:02.585 "low_priority_weight": 0, 00:30:02.585 "medium_priority_weight": 0, 00:30:02.585 "high_priority_weight": 0, 00:30:02.585 "nvme_adminq_poll_period_us": 10000, 00:30:02.585 "nvme_ioq_poll_period_us": 0, 00:30:02.585 "io_queue_requests": 512, 00:30:02.585 "delay_cmd_submit": true, 00:30:02.585 "transport_retry_count": 4, 00:30:02.585 "bdev_retry_count": 3, 00:30:02.585 "transport_ack_timeout": 0, 00:30:02.585 "ctrlr_loss_timeout_sec": 0, 00:30:02.585 "reconnect_delay_sec": 0, 00:30:02.585 "fast_io_fail_timeout_sec": 0, 00:30:02.585 "disable_auto_failback": false, 00:30:02.585 "generate_uuids": false, 00:30:02.585 "transport_tos": 0, 00:30:02.585 "nvme_error_stat": false, 00:30:02.585 "rdma_srq_size": 0, 00:30:02.585 "io_path_stat": false, 00:30:02.585 "allow_accel_sequence": false, 00:30:02.585 "rdma_max_cq_size": 0, 00:30:02.585 "rdma_cm_event_timeout_ms": 0, 00:30:02.585 "dhchap_digests": [ 00:30:02.585 "sha256", 00:30:02.585 "sha384", 00:30:02.585 "sha512" 00:30:02.585 ], 00:30:02.585 "dhchap_dhgroups": [ 00:30:02.585 "null", 00:30:02.585 "ffdhe2048", 00:30:02.585 "ffdhe3072", 00:30:02.585 "ffdhe4096", 00:30:02.585 "ffdhe6144", 00:30:02.585 "ffdhe8192" 00:30:02.585 ] 00:30:02.585 } 00:30:02.585 }, 00:30:02.585 { 00:30:02.585 "method": "bdev_nvme_attach_controller", 00:30:02.585 "params": { 00:30:02.585 "name": "nvme0", 00:30:02.585 "trtype": "TCP", 00:30:02.585 "adrfam": "IPv4", 00:30:02.585 "traddr": "127.0.0.1", 00:30:02.585 "trsvcid": "4420", 00:30:02.585 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:02.585 "prchk_reftag": false, 00:30:02.585 "prchk_guard": false, 00:30:02.586 "ctrlr_loss_timeout_sec": 0, 00:30:02.586 "reconnect_delay_sec": 0, 00:30:02.586 "fast_io_fail_timeout_sec": 0, 00:30:02.586 "psk": "key0", 00:30:02.586 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:02.586 "hdgst": false, 00:30:02.586 "ddgst": false 00:30:02.586 } 00:30:02.586 }, 00:30:02.586 { 00:30:02.586 "method": "bdev_nvme_set_hotplug", 00:30:02.586 "params": { 00:30:02.586 "period_us": 100000, 00:30:02.586 "enable": false 00:30:02.586 } 00:30:02.586 }, 00:30:02.586 { 00:30:02.586 "method": "bdev_wait_for_examine" 00:30:02.586 } 00:30:02.586 ] 00:30:02.586 }, 00:30:02.586 { 00:30:02.586 "subsystem": "nbd", 00:30:02.586 "config": [] 00:30:02.586 } 00:30:02.586 ] 00:30:02.586 }' 00:30:02.586 20:55:36 keyring_file -- keyring/file.sh@114 -- # killprocess 2886592 00:30:02.586 20:55:36 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2886592 ']' 00:30:02.586 20:55:36 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 2886592 00:30:02.586 20:55:36 keyring_file -- common/autotest_common.sh@953 -- # uname 00:30:02.586 20:55:36 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:02.586 20:55:36 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2886592 00:30:02.586 20:55:36 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:02.586 20:55:36 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:02.586 20:55:36 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2886592' 00:30:02.586 killing process with pid 2886592 00:30:02.586 20:55:36 keyring_file -- common/autotest_common.sh@967 -- # kill 2886592 00:30:02.586 Received shutdown signal, test time was about 1.000000 seconds 00:30:02.586 00:30:02.586 Latency(us) 00:30:02.586 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:02.586 =================================================================================================================== 00:30:02.586 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:02.586 20:55:36 keyring_file -- common/autotest_common.sh@972 -- # wait 2886592 00:30:02.845 20:55:37 keyring_file -- keyring/file.sh@117 -- # bperfpid=2888114 00:30:02.846 20:55:37 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2888114 /var/tmp/bperf.sock 00:30:02.846 20:55:37 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2888114 ']' 00:30:02.846 20:55:37 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:02.846 20:55:37 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:30:02.846 20:55:37 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:02.846 20:55:37 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:02.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
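The second bdevperf instance here is started with -c /dev/fd/63: the JSON configuration captured from the first run (the long echo '{...}' that follows) is replayed into the fresh process through bash process substitution, so both key files and the TLS-enabled controller come up without any further RPCs. A minimal sketch of that replay step, assuming rpc.py and bdevperf at their in-tree paths:

# Capture the live configuration of a running bperf instance...
CONFIG=$(scripts/rpc.py -s /var/tmp/bperf.sock save_config)

# ...and hand it to a fresh bdevperf as a config file; the <(...) process
# substitution is what shows up in the command line as /dev/fd/63.
build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$CONFIG")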
00:30:02.846 20:55:37 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:30:02.846 "subsystems": [ 00:30:02.846 { 00:30:02.846 "subsystem": "keyring", 00:30:02.846 "config": [ 00:30:02.846 { 00:30:02.846 "method": "keyring_file_add_key", 00:30:02.846 "params": { 00:30:02.846 "name": "key0", 00:30:02.846 "path": "/tmp/tmp.JCztPr7DQb" 00:30:02.846 } 00:30:02.846 }, 00:30:02.846 { 00:30:02.846 "method": "keyring_file_add_key", 00:30:02.846 "params": { 00:30:02.846 "name": "key1", 00:30:02.846 "path": "/tmp/tmp.sv5FpDbh3D" 00:30:02.846 } 00:30:02.846 } 00:30:02.846 ] 00:30:02.846 }, 00:30:02.846 { 00:30:02.846 "subsystem": "iobuf", 00:30:02.846 "config": [ 00:30:02.846 { 00:30:02.846 "method": "iobuf_set_options", 00:30:02.846 "params": { 00:30:02.846 "small_pool_count": 8192, 00:30:02.846 "large_pool_count": 1024, 00:30:02.846 "small_bufsize": 8192, 00:30:02.846 "large_bufsize": 135168 00:30:02.846 } 00:30:02.846 } 00:30:02.846 ] 00:30:02.846 }, 00:30:02.846 { 00:30:02.846 "subsystem": "sock", 00:30:02.846 "config": [ 00:30:02.846 { 00:30:02.846 "method": "sock_set_default_impl", 00:30:02.846 "params": { 00:30:02.846 "impl_name": "posix" 00:30:02.846 } 00:30:02.846 }, 00:30:02.846 { 00:30:02.846 "method": "sock_impl_set_options", 00:30:02.846 "params": { 00:30:02.846 "impl_name": "ssl", 00:30:02.846 "recv_buf_size": 4096, 00:30:02.846 "send_buf_size": 4096, 00:30:02.846 "enable_recv_pipe": true, 00:30:02.846 "enable_quickack": false, 00:30:02.846 "enable_placement_id": 0, 00:30:02.846 "enable_zerocopy_send_server": true, 00:30:02.846 "enable_zerocopy_send_client": false, 00:30:02.846 "zerocopy_threshold": 0, 00:30:02.846 "tls_version": 0, 00:30:02.846 "enable_ktls": false 00:30:02.846 } 00:30:02.846 }, 00:30:02.846 { 00:30:02.846 "method": "sock_impl_set_options", 00:30:02.846 "params": { 00:30:02.846 "impl_name": "posix", 00:30:02.846 "recv_buf_size": 2097152, 00:30:02.846 "send_buf_size": 2097152, 00:30:02.846 "enable_recv_pipe": true, 00:30:02.846 "enable_quickack": false, 00:30:02.846 "enable_placement_id": 0, 00:30:02.846 "enable_zerocopy_send_server": true, 00:30:02.846 "enable_zerocopy_send_client": false, 00:30:02.846 "zerocopy_threshold": 0, 00:30:02.846 "tls_version": 0, 00:30:02.846 "enable_ktls": false 00:30:02.846 } 00:30:02.846 } 00:30:02.846 ] 00:30:02.846 }, 00:30:02.846 { 00:30:02.846 "subsystem": "vmd", 00:30:02.846 "config": [] 00:30:02.846 }, 00:30:02.846 { 00:30:02.846 "subsystem": "accel", 00:30:02.846 "config": [ 00:30:02.846 { 00:30:02.846 "method": "accel_set_options", 00:30:02.846 "params": { 00:30:02.846 "small_cache_size": 128, 00:30:02.846 "large_cache_size": 16, 00:30:02.846 "task_count": 2048, 00:30:02.846 "sequence_count": 2048, 00:30:02.846 "buf_count": 2048 00:30:02.846 } 00:30:02.846 } 00:30:02.846 ] 00:30:02.846 }, 00:30:02.846 { 00:30:02.846 "subsystem": "bdev", 00:30:02.846 "config": [ 00:30:02.846 { 00:30:02.846 "method": "bdev_set_options", 00:30:02.846 "params": { 00:30:02.846 "bdev_io_pool_size": 65535, 00:30:02.846 "bdev_io_cache_size": 256, 00:30:02.846 "bdev_auto_examine": true, 00:30:02.846 "iobuf_small_cache_size": 128, 00:30:02.846 "iobuf_large_cache_size": 16 00:30:02.846 } 00:30:02.846 }, 00:30:02.846 { 00:30:02.846 "method": "bdev_raid_set_options", 00:30:02.846 "params": { 00:30:02.846 "process_window_size_kb": 1024 00:30:02.846 } 00:30:02.846 }, 00:30:02.846 { 00:30:02.846 "method": "bdev_iscsi_set_options", 00:30:02.846 "params": { 00:30:02.846 "timeout_sec": 30 00:30:02.846 } 00:30:02.846 }, 00:30:02.846 { 00:30:02.846 "method": 
"bdev_nvme_set_options", 00:30:02.846 "params": { 00:30:02.846 "action_on_timeout": "none", 00:30:02.846 "timeout_us": 0, 00:30:02.846 "timeout_admin_us": 0, 00:30:02.846 "keep_alive_timeout_ms": 10000, 00:30:02.846 "arbitration_burst": 0, 00:30:02.846 "low_priority_weight": 0, 00:30:02.846 "medium_priority_weight": 0, 00:30:02.846 "high_priority_weight": 0, 00:30:02.846 "nvme_adminq_poll_period_us": 10000, 00:30:02.846 "nvme_ioq_poll_period_us": 0, 00:30:02.846 "io_queue_requests": 512, 00:30:02.846 "delay_cmd_submit": true, 00:30:02.846 "transport_retry_count": 4, 00:30:02.846 "bdev_retry_count": 3, 00:30:02.846 "transport_ack_timeout": 0, 00:30:02.846 "ctrlr_loss_timeout_sec": 0, 00:30:02.846 "reconnect_delay_sec": 0, 00:30:02.846 "fast_io_fail_timeout_sec": 0, 00:30:02.846 "disable_auto_failback": false, 00:30:02.846 "generate_uuids": false, 00:30:02.846 "transport_tos": 0, 00:30:02.846 "nvme_error_stat": false, 00:30:02.846 "rdma_srq_size": 0, 00:30:02.846 "io_path_stat": false, 00:30:02.846 "allow_accel_sequence": false, 00:30:02.846 "rdma_max_cq_size": 0, 00:30:02.846 "rdma_cm_event_timeout_ms": 0, 00:30:02.846 "dhchap_digests": [ 00:30:02.846 "sha256", 00:30:02.846 "sha384", 00:30:02.846 "sha512" 00:30:02.846 ], 00:30:02.846 "dhchap_dhgroups": [ 00:30:02.846 "null", 00:30:02.846 "ffdhe2048", 00:30:02.846 "ffdhe3072", 00:30:02.846 "ffdhe4096", 00:30:02.846 "ffdhe6144", 00:30:02.846 "ffdhe8192" 00:30:02.846 ] 00:30:02.846 } 00:30:02.846 }, 00:30:02.846 { 00:30:02.846 "method": "bdev_nvme_attach_controller", 00:30:02.846 "params": { 00:30:02.846 "name": "nvme0", 00:30:02.846 "trtype": "TCP", 00:30:02.846 "adrfam": "IPv4", 00:30:02.846 "traddr": "127.0.0.1", 00:30:02.846 "trsvcid": "4420", 00:30:02.846 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:02.846 "prchk_reftag": false, 00:30:02.846 "prchk_guard": false, 00:30:02.846 "ctrlr_loss_timeout_sec": 0, 00:30:02.846 "reconnect_delay_sec": 0, 00:30:02.846 "fast_io_fail_timeout_sec": 0, 00:30:02.846 "psk": "key0", 00:30:02.846 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:02.846 "hdgst": false, 00:30:02.846 "ddgst": false 00:30:02.846 } 00:30:02.846 }, 00:30:02.846 { 00:30:02.846 "method": "bdev_nvme_set_hotplug", 00:30:02.846 "params": { 00:30:02.846 "period_us": 100000, 00:30:02.846 "enable": false 00:30:02.846 } 00:30:02.846 }, 00:30:02.846 { 00:30:02.846 "method": "bdev_wait_for_examine" 00:30:02.846 } 00:30:02.846 ] 00:30:02.846 }, 00:30:02.846 { 00:30:02.846 "subsystem": "nbd", 00:30:02.846 "config": [] 00:30:02.846 } 00:30:02.846 ] 00:30:02.846 }' 00:30:02.846 20:55:37 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:02.846 20:55:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:02.846 [2024-07-15 20:55:37.221584] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:30:02.846 [2024-07-15 20:55:37.221629] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2888114 ] 00:30:02.846 EAL: No free 2048 kB hugepages reported on node 1 00:30:02.846 [2024-07-15 20:55:37.276287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:03.106 [2024-07-15 20:55:37.357712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:03.106 [2024-07-15 20:55:37.517119] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:03.675 20:55:38 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:03.675 20:55:38 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:30:03.675 20:55:38 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:30:03.675 20:55:38 keyring_file -- keyring/file.sh@120 -- # jq length 00:30:03.675 20:55:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:03.934 20:55:38 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:30:03.934 20:55:38 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:30:03.934 20:55:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:03.934 20:55:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:03.934 20:55:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:03.934 20:55:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:03.934 20:55:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:03.934 20:55:38 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:30:03.934 20:55:38 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:30:03.934 20:55:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:03.934 20:55:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:03.934 20:55:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:03.934 20:55:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:03.934 20:55:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:04.193 20:55:38 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:30:04.193 20:55:38 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:30:04.193 20:55:38 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:30:04.193 20:55:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:30:04.452 20:55:38 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:30:04.452 20:55:38 keyring_file -- keyring/file.sh@1 -- # cleanup 00:30:04.452 20:55:38 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.JCztPr7DQb /tmp/tmp.sv5FpDbh3D 00:30:04.452 20:55:38 keyring_file -- keyring/file.sh@20 -- # killprocess 2888114 00:30:04.452 20:55:38 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2888114 ']' 00:30:04.452 20:55:38 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2888114 00:30:04.452 20:55:38 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:30:04.452 20:55:38 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:04.452 20:55:38 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2888114 00:30:04.452 20:55:38 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:04.452 20:55:38 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:04.452 20:55:38 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2888114' 00:30:04.452 killing process with pid 2888114 00:30:04.452 20:55:38 keyring_file -- common/autotest_common.sh@967 -- # kill 2888114 00:30:04.452 Received shutdown signal, test time was about 1.000000 seconds 00:30:04.452 00:30:04.452 Latency(us) 00:30:04.452 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:04.452 =================================================================================================================== 00:30:04.452 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:04.452 20:55:38 keyring_file -- common/autotest_common.sh@972 -- # wait 2888114 00:30:04.712 20:55:38 keyring_file -- keyring/file.sh@21 -- # killprocess 2886435 00:30:04.712 20:55:38 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2886435 ']' 00:30:04.712 20:55:38 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2886435 00:30:04.712 20:55:38 keyring_file -- common/autotest_common.sh@953 -- # uname 00:30:04.712 20:55:38 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:04.712 20:55:38 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2886435 00:30:04.712 20:55:39 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:04.712 20:55:39 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:04.712 20:55:39 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2886435' 00:30:04.712 killing process with pid 2886435 00:30:04.712 20:55:39 keyring_file -- common/autotest_common.sh@967 -- # kill 2886435 00:30:04.712 [2024-07-15 20:55:39.018248] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:30:04.712 20:55:39 keyring_file -- common/autotest_common.sh@972 -- # wait 2886435 00:30:04.972 00:30:04.972 real 0m12.005s 00:30:04.972 user 0m28.327s 00:30:04.972 sys 0m2.805s 00:30:04.972 20:55:39 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:04.972 20:55:39 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:04.972 ************************************ 00:30:04.972 END TEST keyring_file 00:30:04.972 ************************************ 00:30:04.972 20:55:39 -- common/autotest_common.sh@1142 -- # return 0 00:30:04.972 20:55:39 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:30:04.972 20:55:39 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:30:04.972 20:55:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:04.972 20:55:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:04.972 20:55:39 -- common/autotest_common.sh@10 -- # set +x 00:30:04.972 ************************************ 00:30:04.972 START TEST keyring_linux 00:30:04.972 ************************************ 00:30:04.972 20:55:39 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:30:05.232 * Looking for test storage... 00:30:05.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:30:05.232 20:55:39 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:30:05.232 20:55:39 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:05.232 20:55:39 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:30:05.232 20:55:39 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:05.232 20:55:39 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:05.232 20:55:39 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:05.232 20:55:39 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:05.232 20:55:39 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:05.232 20:55:39 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:05.232 20:55:39 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:05.232 20:55:39 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:05.232 20:55:39 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:05.232 20:55:39 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:05.232 20:55:39 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:05.232 20:55:39 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:05.232 20:55:39 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:05.232 20:55:39 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:05.232 20:55:39 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:05.232 20:55:39 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:05.232 20:55:39 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:05.232 20:55:39 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:05.232 20:55:39 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:05.232 20:55:39 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:05.232 20:55:39 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.232 20:55:39 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.232 20:55:39 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.232 20:55:39 keyring_linux -- paths/export.sh@5 -- # export PATH 00:30:05.232 20:55:39 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.232 20:55:39 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:30:05.232 20:55:39 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:05.232 20:55:39 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:05.233 20:55:39 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:05.233 20:55:39 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:05.233 20:55:39 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:05.233 20:55:39 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:05.233 20:55:39 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:05.233 20:55:39 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:05.233 20:55:39 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:05.233 20:55:39 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:05.233 20:55:39 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:05.233 20:55:39 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:30:05.233 20:55:39 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:30:05.233 20:55:39 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:30:05.233 20:55:39 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:30:05.233 20:55:39 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:30:05.233 20:55:39 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:30:05.233 20:55:39 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:05.233 20:55:39 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:30:05.233 20:55:39 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:30:05.233 20:55:39 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:05.233 20:55:39 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:05.233 20:55:39 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:30:05.233 20:55:39 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:05.233 20:55:39 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:05.233 20:55:39 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:30:05.233 20:55:39 keyring_linux -- nvmf/common.sh@705 -- # python - 00:30:05.233 20:55:39 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:30:05.233 20:55:39 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:30:05.233 /tmp/:spdk-test:key0 00:30:05.233 20:55:39 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:30:05.233 20:55:39 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:30:05.233 20:55:39 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:30:05.233 20:55:39 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:05.233 20:55:39 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:30:05.233 20:55:39 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:30:05.233 20:55:39 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:05.233 20:55:39 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:30:05.233 20:55:39 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:30:05.233 20:55:39 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:05.233 20:55:39 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:30:05.233 20:55:39 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:30:05.233 20:55:39 keyring_linux -- nvmf/common.sh@705 -- # python - 00:30:05.233 20:55:39 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:30:05.233 20:55:39 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:30:05.233 /tmp/:spdk-test:key1 00:30:05.233 20:55:39 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2888654 00:30:05.233 20:55:39 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2888654 00:30:05.233 20:55:39 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:30:05.233 20:55:39 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2888654 ']' 00:30:05.233 20:55:39 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.233 20:55:39 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:05.233 20:55:39 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:05.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:05.233 20:55:39 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:05.233 20:55:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:05.233 [2024-07-15 20:55:39.636295] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
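prep_key above builds each key file by piping the configured hex key through a small inline python helper (nvmf/common.sh's format_interchange_psk) and then locking the file down to 0600. A sketch of what that helper computes, assuming digest 0 (no PSK hash) and a little-endian CRC32 trailer; this is an assumption, though it matches the NVMeTLSkey-1:00:...: strings visible in the keyctl calls below:

# Wrap a configured key in the NVMe TLS PSK interchange format:
# base64(key bytes + CRC32 trailer), inside the NVMeTLSkey-1 envelope.
key=00112233445566778899aabbccddeeff
python3 - "$key" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")  # byte order assumed here
print("NVMeTLSkey-1:00:" + base64.b64encode(key + crc).decode() + ":")
EOF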
00:30:05.233 [2024-07-15 20:55:39.636340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2888654 ] 00:30:05.233 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.233 [2024-07-15 20:55:39.687355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.492 [2024-07-15 20:55:39.760230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:06.059 20:55:40 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:06.059 20:55:40 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:30:06.059 20:55:40 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:30:06.059 20:55:40 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:06.059 20:55:40 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:06.059 [2024-07-15 20:55:40.438190] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:06.059 null0 00:30:06.059 [2024-07-15 20:55:40.470239] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:06.059 [2024-07-15 20:55:40.470577] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:06.059 20:55:40 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:06.059 20:55:40 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:30:06.059 942320467 00:30:06.059 20:55:40 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:30:06.059 685784260 00:30:06.059 20:55:40 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2888719 00:30:06.059 20:55:40 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2888719 /var/tmp/bperf.sock 00:30:06.059 20:55:40 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:30:06.059 20:55:40 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2888719 ']' 00:30:06.059 20:55:40 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:06.059 20:55:40 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:06.059 20:55:40 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:06.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:06.059 20:55:40 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:06.059 20:55:40 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:06.059 [2024-07-15 20:55:40.540823] Starting SPDK v24.09-pre git sha1 f604975ba / DPDK 24.03.0 initialization... 
00:30:06.059 [2024-07-15 20:55:40.540863] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2888719 ] 00:30:06.318 EAL: No free 2048 kB hugepages reported on node 1 00:30:06.318 [2024-07-15 20:55:40.595114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.318 [2024-07-15 20:55:40.674095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:06.884 20:55:41 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:06.884 20:55:41 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:30:06.884 20:55:41 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:30:06.884 20:55:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:30:07.143 20:55:41 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:30:07.143 20:55:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:07.402 20:55:41 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:30:07.402 20:55:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:30:07.661 [2024-07-15 20:55:41.918802] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:07.661 nvme0n1 00:30:07.661 20:55:42 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:30:07.661 20:55:42 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:30:07.661 20:55:42 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:30:07.661 20:55:42 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:30:07.661 20:55:42 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:30:07.661 20:55:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:07.921 20:55:42 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:30:07.921 20:55:42 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:30:07.921 20:55:42 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:30:07.921 20:55:42 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:30:07.921 20:55:42 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:07.921 20:55:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:07.921 20:55:42 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:30:07.921 20:55:42 keyring_linux -- keyring/linux.sh@25 -- # sn=942320467 00:30:07.921 20:55:42 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:30:07.921 20:55:42 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:30:07.921 20:55:42 keyring_linux -- keyring/linux.sh@26 -- # [[ 942320467 == \9\4\2\3\2\0\4\6\7 ]] 00:30:07.921 20:55:42 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 942320467 00:30:07.921 20:55:42 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:30:07.921 20:55:42 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:08.180 Running I/O for 1 seconds... 00:30:09.117 00:30:09.117 Latency(us) 00:30:09.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:09.117 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:09.117 nvme0n1 : 1.01 13726.80 53.62 0.00 0.00 9286.08 6724.56 16982.37 00:30:09.117 =================================================================================================================== 00:30:09.117 Total : 13726.80 53.62 0.00 0.00 9286.08 6724.56 16982.37 00:30:09.117 0 00:30:09.117 20:55:43 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:09.117 20:55:43 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:09.375 20:55:43 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:30:09.375 20:55:43 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:30:09.375 20:55:43 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:30:09.375 20:55:43 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:30:09.375 20:55:43 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:30:09.375 20:55:43 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:09.375 20:55:43 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:30:09.375 20:55:43 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:30:09.375 20:55:43 keyring_linux -- keyring/linux.sh@23 -- # return 00:30:09.375 20:55:43 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:09.375 20:55:43 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:30:09.375 20:55:43 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:09.375 20:55:43 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:09.375 20:55:43 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:09.375 20:55:43 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:09.375 20:55:43 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:09.375 20:55:43 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:09.375 20:55:43 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:09.633 [2024-07-15 20:55:44.012353] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:09.633 [2024-07-15 20:55:44.013011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baefd0 (107): Transport endpoint is not connected 00:30:09.633 [2024-07-15 20:55:44.014005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baefd0 (9): Bad file descriptor 00:30:09.633 [2024-07-15 20:55:44.015007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:09.633 [2024-07-15 20:55:44.015016] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:30:09.633 [2024-07-15 20:55:44.015023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:09.633 request: 00:30:09.633 { 00:30:09.633 "name": "nvme0", 00:30:09.633 "trtype": "tcp", 00:30:09.633 "traddr": "127.0.0.1", 00:30:09.633 "adrfam": "ipv4", 00:30:09.633 "trsvcid": "4420", 00:30:09.633 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:09.633 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:09.633 "prchk_reftag": false, 00:30:09.633 "prchk_guard": false, 00:30:09.633 "hdgst": false, 00:30:09.633 "ddgst": false, 00:30:09.633 "psk": ":spdk-test:key1", 00:30:09.633 "method": "bdev_nvme_attach_controller", 00:30:09.633 "req_id": 1 00:30:09.633 } 00:30:09.633 Got JSON-RPC error response 00:30:09.633 response: 00:30:09.633 { 00:30:09.633 "code": -5, 00:30:09.633 "message": "Input/output error" 00:30:09.633 } 00:30:09.633 20:55:44 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:30:09.633 20:55:44 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:09.633 20:55:44 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:09.633 20:55:44 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:09.633 20:55:44 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:30:09.633 20:55:44 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:30:09.633 20:55:44 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:30:09.633 20:55:44 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:30:09.633 20:55:44 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:30:09.633 20:55:44 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:30:09.633 20:55:44 keyring_linux -- keyring/linux.sh@33 -- # sn=942320467 00:30:09.633 20:55:44 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 942320467 00:30:09.633 1 links removed 00:30:09.633 20:55:44 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:30:09.633 20:55:44 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:30:09.633 20:55:44 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:30:09.633 20:55:44 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:30:09.633 20:55:44 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:30:09.633 20:55:44 keyring_linux -- keyring/linux.sh@33 -- # sn=685784260 00:30:09.633 
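The attach attempt above runs under NOT: once the controller built with key0 has been detached and the bperf keyring shows zero keys, a second bdev_nvme_attach_controller with :spdk-test:key1 is expected to fail, and the es=1 bookkeeping converts that JSON-RPC error into a pass. The core of the idiom (a simplified sketch of the autotest_common.sh helper, which additionally treats es > 128 as a crash rather than an ordinary failure):

    NOT() {
        local es=0
        "$@" || es=$?    # capture the wrapped command's exit status
        (( es != 0 ))    # succeed only if the command failed
    }
    NOT false && echo 'negative test passed'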
20:55:44 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 685784260 00:30:09.633 1 links removed 00:30:09.633 20:55:44 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2888719 00:30:09.633 20:55:44 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2888719 ']' 00:30:09.633 20:55:44 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2888719 00:30:09.633 20:55:44 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:30:09.633 20:55:44 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:09.633 20:55:44 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2888719 00:30:09.633 20:55:44 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:09.633 20:55:44 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:09.633 20:55:44 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2888719' 00:30:09.633 killing process with pid 2888719 00:30:09.633 20:55:44 keyring_linux -- common/autotest_common.sh@967 -- # kill 2888719 00:30:09.633 Received shutdown signal, test time was about 1.000000 seconds 00:30:09.633 00:30:09.633 Latency(us) 00:30:09.633 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:09.633 =================================================================================================================== 00:30:09.633 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:09.633 20:55:44 keyring_linux -- common/autotest_common.sh@972 -- # wait 2888719 00:30:09.891 20:55:44 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2888654 00:30:09.891 20:55:44 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2888654 ']' 00:30:09.891 20:55:44 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2888654 00:30:09.891 20:55:44 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:30:09.891 20:55:44 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:09.891 20:55:44 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2888654 00:30:09.891 20:55:44 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:09.891 20:55:44 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:09.891 20:55:44 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2888654' 00:30:09.891 killing process with pid 2888654 00:30:09.891 20:55:44 keyring_linux -- common/autotest_common.sh@967 -- # kill 2888654 00:30:09.891 20:55:44 keyring_linux -- common/autotest_common.sh@972 -- # wait 2888654 00:30:10.149 00:30:10.149 real 0m5.239s 00:30:10.149 user 0m9.230s 00:30:10.149 sys 0m1.455s 00:30:10.149 20:55:44 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:10.149 20:55:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:10.149 ************************************ 00:30:10.149 END TEST keyring_linux 00:30:10.149 ************************************ 00:30:10.408 20:55:44 -- common/autotest_common.sh@1142 -- # return 0 00:30:10.408 20:55:44 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:30:10.408 20:55:44 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:30:10.408 20:55:44 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:30:10.408 20:55:44 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:30:10.408 20:55:44 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:30:10.408 20:55:44 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:30:10.408 20:55:44 -- 
spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:30:10.408 20:55:44 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:30:10.408 20:55:44 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:30:10.408 20:55:44 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:30:10.408 20:55:44 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:30:10.408 20:55:44 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:30:10.408 20:55:44 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:30:10.408 20:55:44 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:30:10.408 20:55:44 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:30:10.408 20:55:44 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:30:10.408 20:55:44 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:30:10.408 20:55:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:10.408 20:55:44 -- common/autotest_common.sh@10 -- # set +x 00:30:10.408 20:55:44 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:30:10.408 20:55:44 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:30:10.408 20:55:44 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:30:10.408 20:55:44 -- common/autotest_common.sh@10 -- # set +x 00:30:15.710 INFO: APP EXITING 00:30:15.710 INFO: killing all VMs 00:30:15.710 INFO: killing vhost app 00:30:15.710 INFO: EXIT DONE 00:30:17.089 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:30:17.089 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:30:17.089 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:30:17.089 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:30:17.089 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:30:17.089 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:30:17.348 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:30:17.348 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:30:17.348 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:30:17.348 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:30:17.348 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:30:17.348 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:30:17.348 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:30:17.348 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:30:17.348 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:30:17.348 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:30:17.348 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:30:19.884 Cleaning 00:30:19.884 Removing: /var/run/dpdk/spdk0/config 00:30:19.884 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:19.884 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:19.884 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:19.884 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:19.884 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:30:19.884 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:30:19.884 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:30:19.884 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:30:19.884 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:20.144 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:20.144 Removing: /var/run/dpdk/spdk1/config 00:30:20.144 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:30:20.144 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:30:20.144 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:30:20.144 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:30:20.144 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:30:20.144 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:30:20.144 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:30:20.144 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:30:20.144 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:30:20.144 Removing: /var/run/dpdk/spdk1/hugepage_info 00:30:20.144 Removing: /var/run/dpdk/spdk1/mp_socket 00:30:20.144 Removing: /var/run/dpdk/spdk2/config 00:30:20.144 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:30:20.144 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:30:20.144 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:30:20.144 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:30:20.144 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:30:20.144 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:30:20.144 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:30:20.144 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:30:20.144 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:30:20.144 Removing: /var/run/dpdk/spdk2/hugepage_info 00:30:20.144 Removing: /var/run/dpdk/spdk3/config 00:30:20.144 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:30:20.144 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:30:20.144 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:30:20.144 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:30:20.144 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:30:20.144 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:30:20.144 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:30:20.144 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:30:20.144 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:30:20.144 Removing: /var/run/dpdk/spdk3/hugepage_info 00:30:20.144 Removing: /var/run/dpdk/spdk4/config 00:30:20.144 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:30:20.144 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:30:20.144 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:30:20.144 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:30:20.144 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:30:20.144 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:30:20.144 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:30:20.144 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:30:20.144 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:30:20.144 Removing: /var/run/dpdk/spdk4/hugepage_info 00:30:20.144 Removing: /dev/shm/bdev_svc_trace.1 00:30:20.144 Removing: /dev/shm/nvmf_trace.0 00:30:20.144 Removing: /dev/shm/spdk_tgt_trace.pid2504086 00:30:20.144 Removing: /var/run/dpdk/spdk0 00:30:20.144 Removing: /var/run/dpdk/spdk1 00:30:20.144 Removing: /var/run/dpdk/spdk2 00:30:20.144 Removing: /var/run/dpdk/spdk3 00:30:20.144 Removing: /var/run/dpdk/spdk4 00:30:20.144 Removing: /var/run/dpdk/spdk_pid2501949 00:30:20.144 Removing: /var/run/dpdk/spdk_pid2503019 00:30:20.144 Removing: /var/run/dpdk/spdk_pid2504086 00:30:20.144 Removing: /var/run/dpdk/spdk_pid2504817 00:30:20.144 Removing: /var/run/dpdk/spdk_pid2505791 00:30:20.144 Removing: /var/run/dpdk/spdk_pid2506156 00:30:20.144 Removing: /var/run/dpdk/spdk_pid2507398 00:30:20.144 Removing: /var/run/dpdk/spdk_pid2507616 00:30:20.144 Removing: /var/run/dpdk/spdk_pid2507752 00:30:20.144 Removing: /var/run/dpdk/spdk_pid2509304 00:30:20.144 Removing: 
/var/run/dpdk/spdk_pid2510662 00:30:20.144 Removing: /var/run/dpdk/spdk_pid2511016 00:30:20.144 Removing: /var/run/dpdk/spdk_pid2511310 00:30:20.144 Removing: /var/run/dpdk/spdk_pid2511609 00:30:20.144 Removing: /var/run/dpdk/spdk_pid2511902 00:30:20.144 Removing: /var/run/dpdk/spdk_pid2512154 00:30:20.144 Removing: /var/run/dpdk/spdk_pid2512414 00:30:20.144 Removing: /var/run/dpdk/spdk_pid2512694 00:30:20.144 Removing: /var/run/dpdk/spdk_pid2513439 00:30:20.144 Removing: /var/run/dpdk/spdk_pid2516415 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2516680 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2516939 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2516957 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2517454 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2517678 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2518068 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2518187 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2518448 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2518678 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2518874 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2518952 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2519506 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2519753 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2520046 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2520310 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2520336 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2520554 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2520827 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2521105 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2521365 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2521612 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2521870 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2522117 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2522369 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2522617 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2522864 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2523115 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2523364 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2523611 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2523863 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2524112 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2524370 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2524616 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2524866 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2525123 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2525370 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2525622 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2525690 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2526088 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2529649 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2573508 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2577748 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2587740 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2593174 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2597072 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2597807 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2604204 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2610207 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2610215 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2611125 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2612039 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2612952 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2613555 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2613652 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2613883 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2613896 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2613910 00:30:20.404 Removing: 
/var/run/dpdk/spdk_pid2614813 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2615725 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2616642 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2617326 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2617333 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2617569 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2618815 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2619797 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2628119 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2628583 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2632734 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2638470 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2641587 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2651980 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2660873 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2662699 00:30:20.404 Removing: /var/run/dpdk/spdk_pid2663622 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2680009 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2683773 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2708958 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2713451 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2715054 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2716892 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2717133 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2717371 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2717603 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2718120 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2719947 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2720942 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2721445 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2723684 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2724551 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2725280 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2729340 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2739291 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2743303 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2749363 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2750813 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2752137 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2756510 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2760664 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2768020 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2768024 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2772733 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2772962 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2773122 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2773512 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2773535 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2778442 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2778999 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2783325 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2786085 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2791470 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2797024 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2805572 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2812325 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2812361 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2830905 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2831600 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2832296 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2832863 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2833748 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2834442 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2835007 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2835619 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2839876 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2840113 00:30:20.664 Removing: 
/var/run/dpdk/spdk_pid2846173 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2846446 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2848667 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2856173 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2856178 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2861195 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2863165 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2865127 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2866300 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2868708 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2869944 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2878447 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2878906 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2879494 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2881770 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2882308 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2882774 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2886435 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2886592 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2888114 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2888654 00:30:20.664 Removing: /var/run/dpdk/spdk_pid2888719 00:30:20.664 Clean 00:30:20.923 20:55:55 -- common/autotest_common.sh@1451 -- # return 0 00:30:20.923 20:55:55 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:30:20.923 20:55:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:20.923 20:55:55 -- common/autotest_common.sh@10 -- # set +x 00:30:20.923 20:55:55 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:30:20.923 20:55:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:20.923 20:55:55 -- common/autotest_common.sh@10 -- # set +x 00:30:20.923 20:55:55 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:30:20.923 20:55:55 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:30:20.923 20:55:55 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:30:20.923 20:55:55 -- spdk/autotest.sh@391 -- # hash lcov 00:30:20.923 20:55:55 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:30:20.923 20:55:55 -- spdk/autotest.sh@393 -- # hostname 00:30:20.923 20:55:55 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:30:21.182 geninfo: WARNING: invalid characters removed from testname! 
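Coverage post-processing starts here: a per-test capture (cov_test.info) is taken against the baseline (cov_base.info), then merged and filtered. Stripped of the long --rc flag lists repeated on every call, the sequence traced below reduces to the following (the final genhtml step is an assumption for local inspection; this job stops at cov_total.info):

    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info   # merge baseline + test run
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info        # drop vendored DPDK sources
    lcov -q -r cov_total.info '/usr/*' -o cov_total.info          # drop system headers
    lcov -q -r cov_total.info '*/examples/vmd/*' -o cov_total.info
    lcov -q -r cov_total.info '*/app/spdk_lspci/*' -o cov_total.info
    lcov -q -r cov_total.info '*/app/spdk_top/*' -o cov_total.info
    genhtml cov_total.info -o coverage/                           # optional HTML report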
00:30:43.121 20:56:15 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:43.699 20:56:17 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:45.603 20:56:19 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:47.507 20:56:21 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:48.885 20:56:23 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:50.823 20:56:25 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:52.731 20:56:26 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:52.731 20:56:26 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:52.731 20:56:26 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:52.731 20:56:26 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:52.731 20:56:26 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:52.731 20:56:26 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.731 20:56:26 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.731 20:56:26 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.731 20:56:26 -- paths/export.sh@5 -- $ export PATH 00:30:52.731 20:56:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.731 20:56:26 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:30:52.731 20:56:26 -- common/autobuild_common.sh@444 -- $ date +%s 00:30:52.731 20:56:26 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721069786.XXXXXX 00:30:52.731 20:56:26 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721069786.oiCwbF 00:30:52.731 20:56:26 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:30:52.731 20:56:26 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:30:52.731 20:56:26 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:30:52.731 20:56:26 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:30:52.731 20:56:26 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:30:52.731 20:56:26 -- common/autobuild_common.sh@460 -- $ get_config_params 00:30:52.731 20:56:26 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:30:52.731 20:56:26 -- common/autotest_common.sh@10 -- $ set +x 00:30:52.731 20:56:26 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:30:52.731 20:56:26 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:30:52.731 20:56:26 -- pm/common@17 -- $ local monitor 00:30:52.731 20:56:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:52.731 20:56:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:52.731 20:56:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:52.731 20:56:26 -- pm/common@21 -- $ date +%s 00:30:52.731 20:56:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:52.731 20:56:26 -- pm/common@21 -- $ date +%s 00:30:52.731 
20:56:26 -- pm/common@25 -- $ sleep 1 00:30:52.731 20:56:26 -- pm/common@21 -- $ date +%s 00:30:52.731 20:56:26 -- pm/common@21 -- $ date +%s 00:30:52.731 20:56:26 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721069786 00:30:52.731 20:56:26 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721069786 00:30:52.731 20:56:26 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721069786 00:30:52.731 20:56:26 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721069786 00:30:52.731 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721069786_collect-vmstat.pm.log 00:30:52.731 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721069786_collect-cpu-temp.pm.log 00:30:52.731 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721069786_collect-cpu-load.pm.log 00:30:52.731 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721069786_collect-bmc-pm.bmc.pm.log 00:30:53.670 20:56:27 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:30:53.670 20:56:27 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96 00:30:53.670 20:56:27 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:53.670 20:56:27 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:53.670 20:56:27 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:30:53.670 20:56:27 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:53.670 20:56:27 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:53.670 20:56:27 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:30:53.670 20:56:27 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:30:53.670 20:56:28 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:53.670 20:56:28 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:30:53.670 20:56:28 -- pm/common@29 -- $ signal_monitor_resources TERM 00:30:53.670 20:56:28 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:30:53.670 20:56:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:53.670 20:56:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:30:53.670 20:56:28 -- pm/common@44 -- $ pid=2898878 00:30:53.670 20:56:28 -- pm/common@50 -- $ kill -TERM 2898878 00:30:53.670 20:56:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:53.670 20:56:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:30:53.670 20:56:28 -- pm/common@44 -- $ pid=2898879 00:30:53.670 20:56:28 -- pm/common@50 -- $ kill 
-TERM 2898879 00:30:53.670 20:56:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:53.670 20:56:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:30:53.670 20:56:28 -- pm/common@44 -- $ pid=2898881 00:30:53.670 20:56:28 -- pm/common@50 -- $ kill -TERM 2898881 00:30:53.670 20:56:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:53.670 20:56:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:30:53.670 20:56:28 -- pm/common@44 -- $ pid=2898904 00:30:53.670 20:56:28 -- pm/common@50 -- $ sudo -E kill -TERM 2898904 00:30:53.670 + [[ -n 2398204 ]] 00:30:53.670 + sudo kill 2398204 00:30:53.680 [Pipeline] } 00:30:53.701 [Pipeline] // stage 00:30:53.707 [Pipeline] } 00:30:53.730 [Pipeline] // timeout 00:30:53.737 [Pipeline] } 00:30:53.759 [Pipeline] // catchError 00:30:53.764 [Pipeline] } 00:30:53.783 [Pipeline] // wrap 00:30:53.790 [Pipeline] } 00:30:53.810 [Pipeline] // catchError 00:30:53.821 [Pipeline] stage 00:30:53.824 [Pipeline] { (Epilogue) 00:30:53.840 [Pipeline] catchError 00:30:53.841 [Pipeline] { 00:30:53.856 [Pipeline] echo 00:30:53.858 Cleanup processes 00:30:53.865 [Pipeline] sh 00:30:54.149 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:54.149 2899005 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:30:54.149 2899279 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:54.163 [Pipeline] sh 00:30:54.444 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:54.444 ++ grep -v 'sudo pgrep' 00:30:54.444 ++ awk '{print $1}' 00:30:54.444 + sudo kill -9 2899005 00:30:54.458 [Pipeline] sh 00:30:54.739 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:31:04.717 [Pipeline] sh 00:31:04.999 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:31:04.999 Artifacts sizes are good 00:31:05.013 [Pipeline] archiveArtifacts 00:31:05.020 Archiving artifacts 00:31:05.178 [Pipeline] sh 00:31:05.462 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:31:05.477 [Pipeline] cleanWs 00:31:05.488 [WS-CLEANUP] Deleting project workspace... 00:31:05.488 [WS-CLEANUP] Deferred wipeout is used... 00:31:05.496 [WS-CLEANUP] done 00:31:05.498 [Pipeline] } 00:31:05.521 [Pipeline] // catchError 00:31:05.535 [Pipeline] sh 00:31:05.820 + logger -p user.info -t JENKINS-CI 00:31:05.829 [Pipeline] } 00:31:05.842 [Pipeline] // stage 00:31:05.849 [Pipeline] } 00:31:05.867 [Pipeline] // node 00:31:05.874 [Pipeline] End of Pipeline 00:31:05.908 Finished: SUCCESS
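For reference, the cleanup sweep in the epilogue above (pgrep for anything still running out of the workspace, drop the pgrep itself from the match, kill the rest) is self-contained enough to reuse; a sketch with an explicit empty-match guard (the guard is an addition, not part of the traced run):

    ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    pids=$(sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}')
    [ -n "$pids" ] && sudo kill -9 $pids   # forcibly stop leftover test processes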